My First Month: What I Got Wrong About AI
A brutally honest look at my assumptions, my mistakes, and what I wish someone had told me before I started.
"Have you tried AI?"
My manager asked me this for the third time in two weeks. Each time, I nodded and said I'd look into it. Each time, I opened my laptop, stared at my screen, and had absolutely no idea where to start.
Actually, that's not quite true. I knew where to start. I just didn't want to.
Because here's what I really thought: using AI felt like cheating.
I'm a Six Sigma Black Belt and KCS-certified Knowledge Management Program Manager with over 20 years in business operations. I was proud of spending hours researching, analyzing, and writing everything myself. That's what showed my value. That's what my expertise meant.
If I handed my work over to AI, what was I even contributing?
My manager kept bringing it up anyway. He's a big proponent of AI, and he could see I was drowning in a particular project. His suggestions were gentle but persistent—not pushy, just... there. A reminder that there might be tools that could help.
I'd see emails about AI land in my inbox with subject lines like "10 Ways AI Will Transform Your Workflow," but I never opened them. They felt like one more thing to add to an already overwhelming to-do list. More importantly, they felt like an invitation to cheat.
But the project wasn't getting easier. And my manager kept checking in.
That's where everything went sideways.
The Breaking Point: When Exhaustion Wins
Here's what I was working with: five or six Google Docs scattered between my personal Drive and a shared drive. Some I'd created myself. Others had been developed by the team. All of them contained information about a new organizational structure we were implementing.
My task was to analyze these documents and find where roles and responsibilities overlapped, where there were gaps, and how everything fit together. The goal was strategic—I needed to make a case to leadership about how different parts of the organization could work together more efficiently.
It was exactly the kind of high-level analysis I'd always done manually. The kind that required my expertise, my judgment, my 20 years of experience.
My manager had suggested I try GitHub Copilot weeks ago. "It's what we're approved to use here," he'd said. I'd signed up for it—the company-sanctioned option—but I hadn't actually used it.
Because it still felt like cheating.
Instead, I opened all six documents. I started making notes. I created a comparison spreadsheet. I highlighted overlapping sections.
Three hours in, my eyes were crossing. I was terrified that, in my fatigue, I'd miss something important. The deadline was looming. And I was exhausted.
That's when I finally gave in.
Pure desperation. That's what it took.
Mistake #1: Not Knowing How to Ask the Question
GitHub Copilot has chat functionality where you can talk to different AI models. I selected Claude—I'd heard it was good with documents. I copied text from my first Google Doc, pasted it in, and stared at the screen. The AI was right there, ready to help. I just needed to... ask it something?
I typed: "What does this say?"
I got back a response. Something generic about the document containing organizational information. Technically accurate, but completely unhelpful.
I tried again: "Summarize this."
This time I got a wall of text that was, again, technically accurate but told me nothing I didn't already know from reading the document myself. It regurgitated the content back to me in different words.
I felt profoundly stupid. I couldn't even get it to give me useful insights. I closed my laptop and went to make coffee.
The problem wasn't the tool. The problem was me. I was treating AI like a search engine, throwing vague requests at it and expecting it to read my mind about what I actually needed.
Mistake #2: Not Recognizing When AI Was Making Things Up
After a few more frustrating attempts, I stepped back and tried to articulate my actual task:
I need to analyze these documents and identify where roles and responsibilities overlap, where there are gaps in coverage, and how these different pieces fit together strategically.
Once I wrote that down, I realized I could just... tell the AI that.
So I uploaded another document and wrote:
"This document outlines part of our new organizational structure. I need to identify areas where the responsibilities described here overlap with or complement roles described in other documents. Can you highlight key responsibilities and note any areas where things seem unclear or might create redundancy?"
Finally, I got what felt like a useful response. The AI identified several areas where responsibilities seemed to overlap across documents—things like "process documentation," "quality review," and "cross-team coordination."
I copied the insights into my notes and moved on to the next document. Progress! This was exactly the kind of pattern identification I needed.
Except... something felt off.
As I went through the AI's analysis of the third document, I noticed it making claims about a "tiered escalation process" that seemed oddly specific. I went back to the original document. That process wasn't there. Not exactly, anyway. The document mentioned escalation pathways, but the AI had described a three-tier system with specific timeframes that didn't exist in my source material.
This is where my experience with Knowledge Centered Service (KCS) saved me. In KCS, accuracy is everything. You learn to verify information, check sources, and catch when something sounds plausible but isn't quite right. That attention to precision—that habit of asking "wait, did the document actually say that?"—kicked in.
I went back through the previous analyses. Sure enough, the AI had been... creative. It hadn't invented anything wholesale, but it had filled in gaps with reasonable-sounding details that weren't actually in my documents. Some of the "overlaps" it had identified weren't as clear-cut as it made them sound. It had inferred connections, assumed similarities, and extrapolated points that I had never written.
In the AI world, this is called "hallucination." The AI generates content that sounds confident and coherent but isn't based on the source material. And if I hadn't been trained to verify information, I might have presented those weak connections to leadership as if they were solid—completely undermining my credibility and the case I was trying to build.
This was my wake-up call: AI doesn't just need good prompts. It needs a human who knows enough to catch when it's wrong.
The Turning Point: Learning to Verify
Once I understood that AI could hallucinate, my entire approach changed. I stopped taking its responses at face value and started treating them as a starting point—something to verify, not something to trust blindly.
I developed a process:
Ask the AI a specific question about a document
Read the response critically
Go back to the source document and verify every claim
Note what was accurate and what was inference or fabrication
Refine my prompt to be more explicit about sticking to source material
I started adding phrases like "based only on what's explicitly stated in the document" and "do not infer or extrapolate beyond what's written." The responses became more conservative and, paradoxically, more useful. The AI stopped trying to be helpful by filling in gaps and started being helpful by highlighting what was actually there—and what wasn't there.
When I asked it to compare two documents, I'd follow up with: "Are there any differences you noted that require you to infer meaning rather than compare explicit statements?" This forced the AI to distinguish between what it could prove and what it was guessing.
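That verify-every-claim step is tedious to describe in the abstract, so here's a rough sketch of it in Python. Everything here is my own illustration, not a feature of Copilot or any AI tool: it simply flags claims whose key words mostly never appear in the source text, so a human knows which claims to check against the document first.

```python
def _key_words(text):
    """Lowercase words longer than three characters, punctuation stripped."""
    return [w.strip(".,;:!?\"'") for w in text.lower().split()
            if len(w.strip(".,;:!?\"'")) > 3]

def flag_unsupported_claims(claims, source_text, min_overlap=0.6):
    """Return the claims whose key words mostly don't appear in the source.

    A flagged claim isn't necessarily wrong -- it's a prompt for the human
    to go back to the document, exactly as in the manual process above.
    """
    source_words = set(_key_words(source_text))
    flagged = []
    for claim in claims:
        words = _key_words(claim)
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < min_overlap:
            flagged.append(claim)
    return flagged
```

On the escalation example above, a claim about "escalation pathways for support teams" would pass, while an invented "three-tier escalation system with 24-hour timeframes" would be flagged for manual review. Crude word overlap obviously can't judge meaning; the point is only to prioritize where your own verification time goes.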
It was more work than I'd expected. The AI didn't magically do my job for me. But it did help me process information faster once I learned to use it as an analytical partner rather than an oracle.
The Revelation: It Wasn't Cheating After All
Here's what changed my entire perspective:
The AI gave me the initial analysis in minutes instead of hours. But I still had to:
Verify every single point against the source documents
Apply my 20+ years of expertise to validate the logic
Make all the strategic decisions about what mattered most
Use my judgment about what to present to leadership
The AI didn't do MY work. It did the tedious comparison work so I could focus on the strategic thinking that actually requires my expertise.
The work I'd been spending three hours on—reading, highlighting, copying text into a spreadsheet—that didn't need 20 years of business operations experience. A junior analyst could do that. Or an AI.
But the synthesis? The strategic judgment? The ability to catch when something sounds plausible but isn't quite right? That's where my value lives.
I'd been spending my most valuable hours on the least valuable tasks.
It wasn't cheating. It was working at the level I'm actually paid to work at.
What I Learned: The Real AI Basics
After a month of stumbling through this, here's what I wish someone had told me from the beginning:
1. Specificity is everything. "Summarize this" gets you a summary. "Based on what's explicitly stated in this document, identify the key roles and responsibilities, and highlight any areas where responsibilities are ambiguous or undefined" gets you actual insights. The more specific your question, the more useful the answer.
2. AI will confidently make things up. This is not a bug. It's not a sign you're using the wrong tool. It's how AI works. It generates plausible-sounding text based on patterns, and sometimes those patterns lead it to fill in gaps with information that isn't there. You need domain expertise to catch this.
3. Your expertise matters more than ever. I thought AI would replace my need to understand the content deeply. Actually, it's the opposite. My KCS background—my training in verifying information and maintaining accuracy—was the only reason I caught the hallucinations. Without domain knowledge, I would have passed along fabricated details as facts.
4. Always verify against source material. Every claim the AI makes should be traceable back to your documents. If you can't find it in the source, assume the AI inferred it. This doesn't mean the inference is wrong, but it means you need to decide if it's reasonable, not the AI.
5. Prompt for honesty, not helpfulness. Add phrases like "based only on the source material" or "if this information isn't in the document, say so" to your prompts. The AI wants to be helpful, which sometimes means it tries to give you complete answers even when the complete information isn't available.
6. Context is your friend. Instead of asking one giant question about all your documents at once, break it down. Analyze one document thoroughly, verify the insights, then move to the next. Build up your understanding piece by piece rather than trying to process everything at once.
7. You're allowed to iterate. My first prompts were terrible. My tenth prompts were better. My twentieth prompts actually got me useful information that I could verify. That's normal. You're not supposed to be an AI expert on day one.
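One small habit that covers points 1, 4, and 5 at once is wrapping every question in the same grounding boilerplate. Here's a minimal sketch in Python; the wording is mine, not an official template from any vendor, and you'd paste the result into whichever chat tool you're approved to use:

```python
GROUNDING_RULES = (
    "Answer based only on what is explicitly stated in the document above. "
    "Do not infer or extrapolate beyond what is written. "
    "If the information needed is not in the document, say so."
)

def grounded_prompt(document_text: str, question: str) -> str:
    """Combine a source document, a specific question, and grounding rules
    into a single prompt string ready to paste into a chat window."""
    return f"Document:\n{document_text}\n\nQuestion: {question}\n\n{GROUNDING_RULES}"
```

Keeping the rules in one constant means every question you ask starts from the honest-over-helpful posture, instead of you remembering to retype the caveats each time.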
What Still Feels Messy
A month in, I'm not an AI wizard. I still write prompts that don't quite work. I still sometimes use the wrong tool or forget to give enough context. I still have moments where I think, "This would be faster if I just did it myself."
But here's what's changed: I'm not paralyzed anymore. When my manager suggests trying AI for something, I don't freeze. I think about what I'm trying to accomplish, pick a tool that might work, and try something. If it doesn't work, I adjust and try again.
The barrier to entry isn't technical skill—it's permission to be bad at something for a while. It's the willingness to type a clumsy prompt, get a weird answer, and try again instead of giving up.
The Advice I'd Give My Month-Ago Self
If I could go back and talk to the version of me who was frantically trying to figure out how to use GitHub Copilot at 11 PM, here's what I'd say:
Start with whatever tool you have access to. If your company has approved AI tools, use those. Don't worry about whether you have the "best" AI. Focus on learning how to work with AI in general.
Your first ten prompts will be vague. That's fine. Everyone's are. The only way to get better is to try, see what happens, and adjust. Write down what you're trying to accomplish in plain English before you even start prompting.
Assume the AI is guessing. Not always, but sometimes. Your job is to figure out when. If it says something that sounds oddly specific or complete, go back to your source and verify. If you can't find it, the AI probably inferred it.
Your expertise is your superpower. Whatever you know deeply—whether it's KCS, finance, HR processes, legal compliance, anything—that knowledge is what lets you catch AI mistakes. Don't think of AI as replacing your expertise. Think of it as a tool that only works well because you have expertise.
Don't expect magic. AI won't read your documents and automatically solve all your problems. It's a tool for processing information faster, not a replacement for your thinking. But it's a really useful tool once you learn how to verify its outputs.
You're not behind. Everyone using AI went through this same awkward learning phase. They just don't talk about it. The difference between them and you isn't that they're smarter—it's just that they've already gotten through their month of feeling like they don't know what they're doing.
Where I Am Now
I still don't use AI for everything. I still do plenty of work the old-fashioned way—reading documents myself, thinking through problems without algorithmic assistance, having actual conversations with actual humans.
But when I have multiple documents I need to cross-reference? I use AI to help me identify patterns and potential overlaps—then I verify every single claim against the source material. When I'm trying to spot gaps in a process? I ask AI to analyze the structure—then I use my own judgment to determine if the gaps it identified are real or inferred.
When I need to quickly understand a new policy document? I ask the AI to break down the main points in plain language—then I read the actual document to make sure the summary is accurate.
The AI has become a tool in my toolkit for processing information faster. But my expertise, my critical thinking, my ability to verify—those are what make the tool useful. Without them, I'd just be copying and pasting hallucinations.
And the best part? I'm not scared of it anymore. I'm not intimidated by prompts. I understand that the AI is a language model that sometimes makes things up, and I know how to catch it when it does.
That might not sound like a huge accomplishment, but for someone who spent weeks avoiding AI because I didn't know where to start, it feels like progress.
If you're in that place right now—reading articles about AI, feeling like you should be using it, but having no idea how to start—I want you to know: you don't have to be an expert. You just have to be willing to learn two things: how to ask good questions, and how to verify the answers.
Your expertise in your field isn't obsolete because of AI. It's more valuable than ever. It's what lets you know when the AI is right and when it's confidently making things up.
Pick a tool. Ask a specific question. Verify the answer against your source material. Adjust your approach based on what you learn. It probably won't work perfectly the first time. That's okay.
You're not behind. You're exactly where everyone started.
And here's something I didn't expect: learning to ask AI better questions taught me something completely unexpected about how I'd been communicating with actual humans for the past 20 years. But that's a story for next week.
Coming next: How learning to "prompt" AI revealed a communication problem I didn't know I had—and accidentally fixed my emails, meetings, and delegation in the process.