The Third Third
Flirting with Claude

Soooooo, I’ve been flirting with Artificial Intelligence (AI). Specifically, with Anthropic’s AI, Claude, in a Claude Pro Plan that was a gift from my husband. I know this sounds wrong: My husband gives me something/someone called Claude to flirt with? But it’s not like that at all. 

Claude is not a man, not even a human, much less “the other man” in my life. Claude is an “it,” if anything, the AI-fueled “office wife” or executive secretary I never had. And the flirting? Well, yes, there’s an attraction, a curiosity about how far a relationship with AI can go, professionally, and no real serious intent. All light-hearted fun, albeit in the context of a brave new world I want to explore, one that’s marrying thinking with technology in ways I haven’t begun to understand.


In the last four months, I have asked Claude to act like a super-powered search engine for me, to summarize plots of books I’ve read but don’t remember enough to discuss at the next book club, to create a gift certificate for a grandson’s new bike, to show me how I could transition from a website blog to a Substack. The responses have provided varying degrees of satisfaction and, yes, delight. Reporter that I am, I always ask for specific sources and citations. Writer that I am, I never ask Claude to compose or edit anything beyond the conversation we have about my queries. Indeed, the specter of Claude or any AI instrument being better at writing than I am (quicker, more precise, more poetic, more fluid, more readable, more thoughtful — Aarrghhh!) hangs over every overture I make to this snowflake-shaped app on my phone and my computer.

So when I decided to advance the relationship and test Claude’s creative ability, I avoided words altogether. Instead I asked Claude to design an avatar for me to use as a headshot on my thethirdthird.com columns, a sketch of someone like me (at a computer, playing bridge, cooking, talking with friends, traveling, reading, exercising, a dozen images in all). I provided what I thought was a fairly detailed and accurate description of myself, and Claude asked a few questions about style and palette, thanked me for including a photo of myself, and went to work, posting a note that this was a multi-step process and that it would take some time to produce an image.

It didn’t take very long, though, before a cartoonish figure appeared, seated in front of a computer. Round face, vacant oversized eyes, a terrible haircut hanging down unevenly, silver rather than blond, a boxy torso, and everything rendered head-on, with no angle to show that, yes, actually, she was sitting at a desk in front of a computer.

Claude had let me down. I was disappointed. This is how I present in the world of artificial intelligence? But wait, this was just Claude’s first effort, and AI should be tireless, right? So we worked, Claude and I, to tweak the image to look more, well, more professional, classy, dynamic, more like how I wanted my avatar to appear. It was a struggle. Claude was certainly willing, but not particularly able.

 I took a break, because I was tired of looking at this ridiculous caricature, and sensitive about being too critical toward Claude, as if, were I to offend the machine, it would never produce the updated graphics I wanted for my site. 

Refreshed by a manicure (!), I returned to my computer and fessed up. This just wasn’t what I had in mind. We had to go back to Square One and create something more sophisticated, thoughtful, energetic, and blond-with-a-good-haircut! Claude suggested maybe something more New Yorker in style. That’s it, I typed, and waited again. The next iteration was an improvement, not great, really, but something I thought I could work with. Still, I couldn’t shake the disappointment.

Back to the flirting metaphor: I was putting my best self forward and Claude clearly wasn’t on the same wavelength. I found myself trying a little too hard to tease out of the program something I could genuinely like. It felt sort of like a date that had seemed to have great potential on paper, but in the flesh, well, not so much. A calculation was in order: how hard was I willing to work to realize the potential I had hoped for but was no longer sure was there?

Damn. I had mentally anthropomorphized my interaction with AI after all (why would anyone ever think AI had any human characteristics?!!), and managed it the way one might attempt to manage household help or tradespeople undertaking a task, careful not to seem bossy or arrogant or condescending or too demanding, or inconsiderate or entitled. The cost of doing so was a bit of embarrassment, plus some maddening and inappropriate emotional insecurity. Why, I worried, wasn’t Claude seeing me as I wanted to be seen? Was there something wrong with the way I present myself to the world? Don’t I look and sound like a reasonably intelligent and stylish professional in her seventies? Was it just Claude? Or was it me?

 Fortunately, before Claude could get “sketching” again, Anthropic informed me I had “hit my limit” of interactions with Claude, forcing me to disengage, or pay up for the privilege of insisting Claude could do better. It was good to take a step back and intellectualize the whole experience. 

 I’m now thinking some of the avatar “headshots” will actually work, since no one reading this sees the real me, anyway, and they may, in fact, end up being fairly charming with a few more edits. More importantly, I have started to learn how an AI tool can be used, and that I should never take it too seriously or at all personally. And with AI, as in “real life,” I need to be much better at “asking for what I want.” Lessons learned.