5 AI Prompts That Saved Me 10 Hours This Week
I tested 20 different AI prompts for common business tasks. Here are the five that actually worked—and how you can use them too.
Last week, I wrote about how learning to prompt AI taught me I'd been communicating poorly for 20 years. The response was overwhelming—dozens of messages asking: "Okay, but what exactly do you type?"
So this week, I did an experiment. I committed to using AI for every repetitive or time-consuming task that came across my desk. I tested different prompts, tracked my time, and documented what worked and what spectacularly didn't.
Here's what I learned: About 70% of my attempts were mediocre. About 20% were complete failures that created more work than they saved. But about 10%? Those were game-changers.
Those 10% saved me roughly 10 hours this week. That's not an exaggeration—I literally tracked my time.
Here are the five prompts that made the biggest difference, written in a format you can actually use. Each one includes the exact prompt I used, how to customize it for your needs, and what to watch out for.
Prompt 1: Turn Strategy Documents into Action Plans
What It Does:
Extracts actionable next steps from long strategic documents and creates clear calls-to-action for different stakeholders.
The Scenario:
I had a strategic planning document with multiple initiatives. I needed to translate these into specific actions for managers to take, but the document was dense and the actions were buried in paragraphs of context.
The Prompt:
Based on the initiatives in this document, provide clear calls-to-action for managers. For each action:
1. State what needs to be done
2. Identify who should do it
3. Suggest a realistic timeline
[paste your document here]
How to Use It:
Start with your strategic document, planning doc, or meeting notes
Paste the content into your AI tool
Review the AI's suggestions for accuracy
Ask follow-up questions if anything needs clarification (I did: "Action #3 isn't clear—can you specify what 'review processes' means?")
IMPORTANT: If you notice vague actions, ask AI to clarify them before sending to your team
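If you reuse this prompt often, it can help to keep it as a template so the structure stays consistent from one document to the next. Here's a minimal Python sketch; the template variable and function name are just illustrative, not part of any standard tooling:

```python
# Reusable template for the action-plan prompt (names here are illustrative)
ACTION_PLAN_PROMPT = """Based on the initiatives in this document, provide clear calls-to-action for managers. For each action:
1. State what needs to be done
2. Identify who should do it
3. Suggest a realistic timeline

{document}"""

def build_action_plan_prompt(document: str) -> str:
    """Fill the template with the strategy document's text.

    Uses str.replace rather than str.format so curly braces in the
    pasted document can't break the substitution.
    """
    return ACTION_PLAN_PROMPT.replace("{document}", document)

print(build_action_plan_prompt("Initiative A: expand self-service support."))
```

Paste the function's output into whatever AI tool you use; the point is only that the numbered structure never drifts between uses.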
What Happened:
AI generated the initial action list in about 30 seconds. But I noticed some actions were vague—things like "review processes" without specifying which processes or what to review for. So I asked AI to clarify, and it gave me more specific, actionable items.
Then I asked AI to create an infographic that captured the CTAs visually. I ended up not using the infographic (it was generic), but having the list saved me from manually reading through a 15-page document multiple times.
Time saved: 2 hours (would have taken me all afternoon to read, extract, and organize these actions manually)
Prompt 2: Find Solutions for Non-Standard Situations
What It Does:
Helps you think through edge cases or situations that don't fit your standard processes.
The Scenario:
Within our support organization sits the operations team. This team doesn't work on customer cases, so they don't have metrics for things like Link Rate or Link Accuracy. However, they write knowledge base articles from time to time.
I needed to figure out how these team members could advance in our KCS competency framework when they don't do the typical work that framework measures.
The Prompt:
[Explain your situation and context in detail] As a [your role], identify ways to address this gap. Does [your proposed solution] make sense? Ask 2-3 clarifying questions if needed to help you formulate a plan.
My Actual Version:
Within the support org sits the operations team. This team doesn't work on cases and therefore won't have metrics for Link Rate, Link Accuracy, and PARs. However, the team will write KB articles from time to time. 5 out of the 9 candidates are from the Ops team. As a KCS Program Manager, identify ways that these team members can move up the KCS competency ladder. Does it make sense to move these people up the competency ladder? Ask 2-3 questions if needed to help you formulate a plan to address this gap.
How to Use It:
Explain your full situation with all relevant context
Frame your specific question or challenge
Explicitly ask AI to ask clarifying questions—this forces it to think through what information might be missing
Answer AI's questions to get more tailored recommendations
What Happened:
AI asked really good clarifying questions about the team's knowledge contribution patterns, the goals of the competency framework, and what metrics we could use beyond case-related ones. This helped me think through aspects I hadn't considered.
The recommendations were solid: alternative metrics like article quality scores, peer reviews, and proactive knowledge creation. Some I implemented, some I adapted, but the thinking process AI guided me through was valuable.
Time saved: 90 minutes (would have spent that time brainstorming alone or scheduling multiple meetings to talk it through)
Prompt 3: Convert Meeting Transcripts into SOPs
What It Does:
Takes messy meeting transcripts and transforms them into clean, structured standard operating procedures.
The Scenario:
I had a Zoom transcript from a training call where we walked through a new process. Instead of writing an SOP from scratch, I wanted to use the transcript as the basis.
The Prompt:
Convert this meeting transcript into a step-by-step Standard Operating Procedure (SOP). Format it as:
1. Purpose/Overview
2. Step-by-step instructions (numbered)
3. Key considerations or warnings
4. Related resources (if mentioned)
Remove conversational elements and focus only on the procedural content.
[paste cleaned transcript here]
How to Use It:
Get your Zoom/Teams/Google Meet transcript
CRITICAL STEP: Remove timestamps and "Speaker 1/Speaker 2" labels first—AI gets confused by these
Clean up obvious conversation fillers ("um," "you know," etc.) if there are many
Paste into AI with the prompt
Review the output for accuracy—AI sometimes misses important context or nuance
Edit to add your organizational specifics
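The timestamp and speaker-label cleanup in steps 2 and 3 is easy to automate. Here's a rough Python sketch that assumes a common Zoom-style line format (`00:01:23 Speaker 1: ...`); transcript formats vary by tool, so treat the patterns as a starting point to adjust:

```python
import re

def clean_transcript(raw: str) -> str:
    """Strip timestamps, speaker labels, and common fillers from a transcript.

    Assumes lines roughly like:
        00:01:23 Speaker 1: Um, so the first step is...
    Adjust the regexes if your tool formats transcripts differently.
    """
    cleaned = []
    for line in raw.splitlines():
        # Drop a leading HH:MM:SS or MM:SS timestamp, if present
        line = re.sub(r"^\s*\d{1,2}:\d{2}(:\d{2})?\s*", "", line)
        # Drop a leading "Speaker N:" or "First Last:" label, if present
        line = re.sub(r"^(Speaker \d+|[A-Z][a-z]+(?: [A-Z][a-z]+)?):\s*", "", line)
        # Remove common conversational fillers
        line = re.sub(r"\b(um|uh|you know)\b,?\s*", "", line, flags=re.IGNORECASE)
        if line.strip():
            cleaned.append(line.strip())
    return "\n".join(cleaned)

raw = "00:01:23 Speaker 1: Um, so the first step is to open the admin console."
print(clean_transcript(raw))
```

Run your transcript through something like this first, then paste the result into the SOP prompt; it avoids the failure mode described below where the AI tries to work the timestamps into the procedure.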
What Happened:
First attempt failed because I left the timestamps in. The AI tried to incorporate them into the SOP, which was nonsensical.
Second attempt (after removing timestamps) worked beautifully. AI pulled out the key steps, organized them logically, and even identified warnings that we'd mentioned casually in conversation.
I still had to edit—AI didn't know our internal tool names or specific team structure. But it cut my drafting time dramatically.
Time saved: 3 hours (writing SOPs from scratch is tedious; this gave me a 70% complete draft to refine)
Prompt 4: Intelligent Email Triage and Cleanup
What It Does:
Helps you quickly identify which emails need immediate attention and which can be archived or deleted, especially when you're drowning in an overflowing inbox.
The Scenario:
I came back from a long weekend to 147 emails. I needed to quickly identify what actually needed my attention versus what could wait or be deleted.
The Initial Prompt:
Review these email subject lines and senders and identify which require immediate attention based on [your criteria: deadlines, sender priority, action required, etc.].
[paste list of emails, or give AI access to your inbox if your tool supports it]
What Happened (And the Follow-Up):
I used this with Gemini since it can access my Gmail. First, I asked it to show me emails requiring immediate attention. It responded that nothing was urgent, but suggested several emails that might be of interest.
This is where the prompt got interesting. I asked a follow-up:
Show me the dates of the emails you suggested I look at.
This simple follow-up was genius (if I do say so myself). Seeing the dates helped me decide: if an email was from last week and nobody followed up, it probably wasn't actually important. If it was from today, I should look.
I went further:
For emails older than 3 days that I haven't responded to, suggest which I can safely archive versus which might need a brief follow-up.
AI helped me identify:
34 emails I could immediately archive (newsletters I'd been meaning to read but never would)
12 emails that needed a quick "sorry for the delay" response
8 emails requiring actual work
The rest could wait
How to Use It:
Give AI access to your emails (if using Gemini) or paste subject lines and senders
Ask it to identify urgent/important emails based on your criteria
Key step: Ask for dates of suggested emails
Use date information to make fast archive/delete/respond decisions
For older emails, ask AI to categorize: needs response, can archive, or delete
Pro tip:
Be specific about YOUR definition of urgent. "Urgent for me means: from leadership, mentions a deadline, or is a direct question requiring my input."
Time saved: 2.5 hours (turned what would have been half a day of email processing into 45 minutes)
Prompt 5: Draft Project Delegation Messages
What It Does:
Creates clear, complete delegation messages that include all the context someone needs to take on a task successfully.
The Scenario:
I needed to delegate a research project to a team member but kept putting it off because I dreaded writing a clear brief. (This was before I learned that being vague was my problem, not theirs.)
The Prompt:
Help me write a delegation message for [task/project] to [team member name or role]. Include:
- Clear objective (what success looks like)
- Context (why this matters)
- Specific deliverables
- Timeline and milestones
- Resources available
- When/how I want updates
- Decision points where they should check in with me
Keep it friendly but clear. Here's the project: [describe what you need done]
How to Use It:
Think through what you actually need (this is the hard part—AI forces you to be clear)
Fill in the project description with as much detail as you have
Let AI structure it into a proper brief
Review and personalize—add any team-specific context AI wouldn't know
Before sending, ask yourself: "Could they start this without asking me any clarifying questions?"
What Happened:
I realized halfway through writing the prompt that I hadn't actually thought through what success looked like. AI couldn't help me until I was clear.
Once I figured out my actual requirements, AI structured them into a much better delegation message than I would have written. It included decision points I hadn't thought about and organized the information logically.
The team member started the project immediately with no clarifying questions. That never happens.
Time saved: 45 minutes (would have spent that writing a vague message, then 20 minutes on Slack clarifying what I meant)
The One That Failed Spectacularly
Not every prompt works. Here's one that wasted my time:
The Failed Prompt: "Make This Sound Professional"
Make this email sound more professional: [paste draft]
What went wrong: AI made it "professional" by making it stiff, formal, and removing all personality. It sounded like a corporate robot wrote it. I sent it anyway (mistake), and got a response asking if I was okay because I "sounded unlike myself."
The lesson: "Professional" is too vague. AI defaulted to formal business speak from the 1990s.
Better version:
Rewrite this email to be clear and respectful while maintaining a warm, collaborative tone. Keep it concise but not terse.
Specificity matters. Always.
What I Learned About Prompts This Week
After testing 20+ prompts, here are the patterns that separate the time-savers from the time-wasters:
Good prompts:
Include the format you want ("numbered steps," "bullet points," "paragraph summary")
Specify your constraints ("in 100 words or less," "by Friday")
Define your terms ("urgent for me means...")
Ask for questions if information is missing
Request specific structure or organization
Bad prompts:
Use vague adjectives ("make it better," "be professional")
Assume AI knows your context
Don't specify output format
Treat AI like a mind reader
Forget to verify accuracy
The Real Time-Saver:
The prompts themselves saved me maybe 2-3 hours. The other 7-8 hours came from finally being clear about what I needed BEFORE I typed anything. AI forced me to think through my requirements, which made everything faster—even when I didn't end up using AI's output.
How to Get Started
If you're new to this, don't try all five prompts this week. Pick ONE scenario that sounds familiar and try it.
My recommendation? Start with Prompt 4 (email triage) if your inbox is overwhelming, or Prompt 1 (action plans) if you have a document that needs to become action items.
Use the prompt exactly as written first. See what happens. Then modify it based on what you need.
And remember: these prompts work because they're specific about format, context, and desired outcome. That's not AI magic—that's just good communication.
The AI can't read your mind. But if you tell it exactly what you need, it's surprisingly good at helping you get there.
Coming next: Now that I've been using AI for a few weeks, I'm noticing something interesting—I'm NOT using it for everything anymore. Some tasks are actually faster without it. Next week, I'll share what I stopped using AI for and why.
How AI Taught Me I'd Been Communicating Poorly for 20 Years
I thought learning to "prompt" AI was just about getting better answers. Instead, it revealed a communication problem I didn't know I had—and accidentally fixed my emails, meetings, and delegation in the process.
Last week, I wrote about my first month with AI—how I went from thinking it was "cheating" to realizing it could actually help me work at the strategic level I'm paid for.
What I didn't mention was what happened next.
After my initial breakthrough with document analysis, I got... enthusiastic. I started using AI for everything. Weekly reports. Email responses. Meeting prep. Project briefs.
And every single attempt was frustrating in a different way.
The AI would give me responses that were technically correct but completely missed the point. Or it would produce generic summaries when I needed strategic analysis. Or it would focus on the wrong aspects of a problem entirely.
I assumed the AI was just... not that smart.
Turns out, the problem was me.
My First Terrible Prompt
Here's what I typed when I wanted to consolidate those six organizational documents I mentioned last week:
"Consolidate these 6 documents and find the similarities."
Simple, right? Clear instruction. The AI should know what to do.
What I got back was a wall of text that basically said: "These documents all discuss organizational structure. They share common themes around roles, responsibilities, and processes. Here are some areas of overlap..."
Technically accurate. Completely useless.
I tried again with slight variations:
"Summarize the key points across these documents."
"What are the main themes here?"
"Compare these documents."
Every response was similarly generic. I was getting increasingly frustrated. The AI clearly couldn't understand what I actually needed.
The Moment Everything Changed
After about the fifth failed attempt, I stopped and asked myself: What am I actually trying to accomplish here?
Not "summarize documents." Not "find similarities." Those were tasks, not outcomes.
What I really needed was:
Identify which roles and responsibilities appear across multiple documents
Flag where different documents have conflicting requirements
Note gaps where something important seems to be missing
Organize everything by priority so I could see what needed immediate attention versus what could wait
Highlight anything that would require a leadership decision to resolve
Once I articulated that, I rewrote my prompt:
"Analyze these 6 policy documents for:
Core similarities in approach
Overlapping roles and responsibilities that could be consolidated
Conflicting requirements that need resolution
Gaps in coverage Organize by priority and flag items requiring leadership decision."
Suddenly, I got back something I could actually use. Strategic analysis. Specific call-outs. Actionable insights organized in a way that made sense.
The AI hadn't gotten smarter. I had gotten clearer.
The Uncomfortable Realization
But here's where it gets interesting—and uncomfortable.
A few days later, I was writing an email to my team about an upcoming project. I hit send, then went back to working on an AI prompt for a different task. As I was carefully articulating exactly what I needed from the AI, a thought struck me:
Why am I being more specific with the AI than I am with my actual team?
I opened that email I'd just sent. It said something like: "Can you pull together the Q3 data and send me a summary? Thanks."
My team member would have to guess:
Which Q3 data? (We track about fifteen different metrics)
What format for the summary? (Spreadsheet? Report? Bullet points?)
What's the deadline?
What am I planning to do with it? (This context would help them know what to prioritize)
How detailed should it be?
I'd been assuming they could read my mind. Just like I'd been assuming the AI could read my mind.
The difference was that the AI had forced me to stop making that assumption. It couldn't guess what I meant. It could only respond to what I actually said.
What Happened When I Applied This to Everything
I started an experiment: What if I communicated with humans using the same precision I'd learned to use with AI?
Emails: Instead of: "Can you review this and get back to me?" I wrote: "Can you review this proposal for technical accuracy and flag any budget concerns? I need your feedback by Friday COB so I can incorporate changes before the Monday stakeholder meeting."
Response time dropped significantly. Back-and-forth clarification emails nearly disappeared.
Project Briefs: Instead of a vague paragraph about objectives, I started structuring them like AI prompts:
Specific deliverable
Success criteria
Constraints and requirements
Context for why this matters
Decision points that need my input
My team started asking fewer clarifying questions. Projects moved faster.
Meeting Agendas: Instead of: "Discuss Q4 planning" I wrote:
10 min: Review current Q3 status (where we stand on timeline and budget)
20 min: Identify Q4 priorities (decision: which 3 initiatives get resources?)
15 min: Flag blockers (decision: what needs to be resolved this week?)
5 min: Assign action items
Meetings stayed on track. Decisions actually got made.
Delegation: Instead of: "Can you handle the client report?" I said: "Can you draft the client report using last quarter's template? Focus on the metrics they specifically asked about in the last meeting—response time and resolution rate. I'll need the draft by Thursday to review before Friday's call. Flag anything where the data looks unusual or incomplete."
Less back-and-forth. Better results. Less need to redo work.
The Business Impact
I started tracking this, because I'm a process nerd and that's what we do.
After two weeks of applying "prompt engineering" principles to all my communication:
Weekly reports: Down from 45 minutes to 12 minutes (I still review and refine AI-generated drafts, but the initial creation is much faster)
Email admin time: Cut by 60% (fewer clarification emails, faster responses from others)
Meeting prep: 30% more efficient (clearer agendas meant less time figuring out what we were actually trying to accomplish)
Delegation rework: Reduced by about 40% (clearer instructions upfront meant less need to redo work)
Total time gained for actual strategic thinking: About 8 hours per week.
Eight hours. That's a full workday I'd been losing to poor communication.
The Real Lesson: AI Didn't Fix My Communication
Here's what's wild about this: The AI didn't fix my communication. It just revealed how broken it was.
For 20 years, I'd been working with smart, capable people who somehow managed to translate my vague requests into actual results. They were doing the mental work of figuring out what I probably meant.
I thought I was being efficient by keeping things brief. "They're professionals," I told myself. "They don't need me to spell everything out."
But what I was actually doing was:
Pushing cognitive load onto other people
Creating opportunities for misalignment
Generating unnecessary back-and-forth
Wasting everyone's time
The AI couldn't compensate for my vagueness. It just reflected my unclear thinking back at me until I fixed it.
And once I fixed it for the AI, I realized I should probably fix it for the humans too.
The KCS Connection
The irony isn't lost on me.
I'm a KCS-certified Knowledge Management Program Manager. I literally teach people how to structure information so it's clear, findable, and useful. I've spent years helping organizations improve their knowledge systems.
And apparently, I'd never applied those principles to my own day-to-day communication.
In KCS, we talk about the importance of context, structure, and clarity. We emphasize that good knowledge isn't just accurate—it's usable. It anticipates what the audience needs to know and provides that information proactively.
I knew all of this intellectually. But I wasn't living it in my daily work.
The AI made me live it. Because the AI doesn't let you get away with assumptions.
What I Learned About Prompt Engineering (That Has Nothing to Do with AI)
After a few weeks of this, here's what I understand about "prompt engineering"—for AI or humans:
1. Specificity is kindness.
Being vague doesn't save time. It shifts the burden of figuring out what you mean onto someone else. Being specific upfront is more efficient for everyone.
2. Context changes everything.
"Review this document" produces different results than "Review this document for technical accuracy before the Monday stakeholder meeting." The context tells people what to focus on and how to prioritize their effort.
3. Structure reduces cognitive load.
Numbered lists. Clear sections. Explicit questions. These aren't just formatting choices—they're ways to make information easier to process and act on.
4. Assumptions are expensive.
Every time you assume someone knows what you mean, you risk misalignment. The cost of being explicit is five extra seconds. The cost of being vague is hours of rework.
5. Questions reveal fuzzy thinking.
If you can't articulate exactly what you need, you probably haven't thought it through yet. Having to write a clear prompt forces you to clarify your own thinking first.
What Still Surprises Me
A month into this experiment, I'm still discovering places where my communication is vaguer than I realized.
Yesterday I asked my manager for "feedback on the proposal." He asked: "Feedback on the approach, the budget, the timeline, or all of it?"
Old me would have said "all of it" and waited for whatever he sent back.
New me said: "Primarily the approach—does this strategy align with what leadership is expecting? Budget is locked, timeline is flexible if you see issues with the phasing."
We had a five-minute conversation that answered my actual question. The old way would have been a week of email back-and-forth.
That's the thing: Once you start paying attention to this, you can't unsee it. You notice every time you're being vague. Every time you're assuming instead of clarifying. Every time you're making someone else guess what you mean.
It's uncomfortable at first. It feels slower to be this explicit. It feels almost pedantic to spell everything out.
But then you realize: You're not slowing down. You're avoiding the much bigger slowdown of miscommunication, rework, and wasted effort.
For Anyone Who Thinks They're "Already Clear"
I thought I was a clear communicator. I've been managing projects and teams for 20 years. I've run workshops on knowledge management. I have Six Sigma Black Belt certification, which is basically a credential in process clarity.
And I was still being vague in ways I didn't recognize until AI forced me to stop.
So if you're thinking, "This doesn't apply to me—I already communicate clearly," I'd invite you to try this experiment:
Take the last email you sent that required action from someone else. Rewrite it as if you were sending it to an AI that will take everything literally and can't make any assumptions about context.
Did you have to add anything? Clarify anything? Specify anything you'd left implicit?
That's probably information the human needed too. They just did the work of filling in the gaps for you.
Where This Goes Next
I'm continuing this experiment, and it's evolving in interesting ways.
This week, I'm testing specific "prompt templates" for five common business scenarios:
Requesting information from a team member
Delegating a project task
Asking for feedback on a proposal
Running a decision-making meeting
Writing a status update
Some of these will work great. Some will probably fail spectacularly. That's part of learning.
But here's what I know so far: The thing I thought would make me obsolete—AI—actually made me better at the most human skill there is: communicating clearly with other people.
Next week, I'll share the actual prompts and templates that worked. The ones I'm now using for both AI and humans.
Because apparently, good communication is good communication, regardless of who—or what—you're communicating with.
Coming next: The 5 prompt templates that saved me 10 hours this week—and how you can use them even if you never touch AI.
My First Month: What I got wrong about AI
A brutally honest look at my assumptions, my mistakes, and what I wish someone had told me before I started.
"Have you tried AI?"
My manager asked me this for the third time in two weeks. Each time, I nodded and said I'd look into it. Each time, I opened my laptop, stared at my screen, and had absolutely no idea where to start.
Actually, that's not quite true. I knew where to start. I just didn't want to.
Because here's what I really thought: using AI felt like cheating.
I'm a Six Sigma Black Belt and KCS-certified Knowledge Management Program Manager with over 20 years in business operations. I was proud of spending hours researching, analyzing, and writing everything myself. That's what showed my value. That's what my expertise meant.
If I handed my work over to AI, what was I even contributing?
My manager kept bringing it up anyway. He's a big proponent of AI, and he could see I was drowning in a particular project. His suggestions were gentle but persistent—not pushy, just... there. A reminder that there might be tools that could help.
I'd see emails about AI land in my inbox with subject lines like "10 Ways AI Will Transform Your Workflow," but I never opened them. They felt like one more thing to add to an already overwhelming to-do list. More importantly, they felt like an invitation to cheat.
But the project wasn't getting easier. And my manager kept checking in.
That's where everything went sideways.
The Breaking Point: When Exhaustion Wins
Here's what I was working with: five or six Google Docs scattered between my personal Drive and a shared drive. Some I'd created myself. Others had been developed by the team. All of them contained information about a new organizational structure we were implementing.
My task was to analyze these documents and find where roles and responsibilities overlapped, where there were gaps, and how everything fit together. The goal was strategic—I needed to make a case to leadership about how different parts of the organization could work together more efficiently.
It was exactly the kind of high-level analysis I'd always done manually. The kind that required my expertise, my judgment, my 20 years of experience.
My manager had suggested I try GitHub Copilot weeks ago. "It's what we're approved to use here," he'd said. I'd signed up for it—the company-sanctioned option—but I hadn't actually used it.
Because it still felt like cheating.
Instead, I opened all six documents. I started making notes. I created a comparison spreadsheet. I highlighted overlapping sections.
Three hours in, my eyes were crossing. I was terrified of missing something important in my fatigue. The deadline was looming. And I was exhausted.
That's when I finally gave in.
Pure desperation. That's what it took.
Mistake #1: Not Knowing How to Ask the Question
GitHub Copilot has chat functionality where you can talk to different AI models. I selected Claude—I'd heard it was good with documents. I copied text from my first Google Doc, pasted it in, and stared at the screen. The AI was right there, ready to help. I just needed to... ask it something?
I typed: "What does this say?"
I got back a response. Something generic about the document containing organizational information. Technically accurate, but completely unhelpful.
I tried again: "Summarize this."
This time I got a wall of text that was, again, technically accurate but told me nothing I didn't already know from reading the document myself. It regurgitated the content back to me in different words.
I felt profoundly stupid. I couldn't even get it to give me useful insights. I closed my laptop and went to make coffee.
The problem wasn't the tool. The problem was me. I was treating AI like a search engine, throwing vague requests at it and expecting it to read my mind about what I actually needed.
Mistake #2: Not Recognizing When AI Was Making Things Up
After a few more frustrating attempts, I stepped back and tried to articulate my actual task:
I need to analyze these documents and identify where roles and responsibilities overlap, where there are gaps in coverage, and how these different pieces fit together strategically.
Once I wrote that down, I realized I could just... tell the AI that.
So I uploaded another document and wrote:
"This document outlines part of our new organizational structure. I need to identify areas where the responsibilities described here overlap with or complement roles described in other documents. Can you highlight key responsibilities and note any areas where things seem unclear or might create redundancy?"
Finally, I got what felt like a useful response. The AI identified several areas where responsibilities seemed to overlap across documents—things like "process documentation," "quality review," and "cross-team coordination."
I copied the insights into my notes and moved on to the next document. Progress! This was exactly the kind of pattern identification I needed.
Except... something felt off.
As I went through the AI's analysis of the third document, I noticed it making claims about a "tiered escalation process" that seemed oddly specific. I went back to the original document. That process wasn't there. Not exactly, anyway. The document mentioned escalation pathways, but the AI had described a three-tier system with specific timeframes that didn't exist in my source material.
This is where my experience with Knowledge Centered Service (KCS) saved me. In KCS, accuracy is everything. You learn to verify information, check sources, and catch when something sounds plausible but isn't quite right. That attention to precision—that habit of asking "wait, did the document actually say that?"—kicked in.
I went back through the previous analyses. Sure enough, the AI had been... creative. It hadn't made up complete fabrications, but it had filled in gaps with reasonable-sounding details that weren't actually in my documents. Some of the "overlaps" it had identified weren't as clear-cut as it made them sound. It had inferred connections, assumed similarities, and extrapolated points that I had never written.
In the AI world, this is called "hallucination." The AI generates content that sounds confident and coherent but isn't based on the source material. And if I hadn't been trained to verify information, I might have presented those weak connections to leadership as if they were solid—completely undermining my credibility and the case I was trying to build.
This was my wake-up call: AI doesn't just need good prompts. It needs a human who knows enough to catch when it's wrong.
The Turning Point: Learning to Verify
Once I understood that AI could hallucinate, my entire approach changed. I stopped taking its responses at face value and started treating them as a starting point—something to verify, not something to trust blindly.
I developed a process:
Ask the AI a specific question about a document
Read the response critically
Go back to the source document and verify every claim
Note what was accurate and what was inference or fabrication
Refine my prompt to be more explicit about sticking to source material
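To make the verification step concrete, here's a deliberately crude sketch of it in Python: flag any AI claim whose key phrases never appear verbatim in the source text. The verify_claims helper is purely illustrative—it's not a real tool, and a naive substring check is no substitute for reading the source yourself—but it captures the spirit of "if you can't find it in the document, treat it as inference."

```python
# Minimal sketch of the "verify every claim against the source" step.
# verify_claims() is a hypothetical helper, not part of any real AI tool:
# it flags claims whose key phrases never appear in the source document.

def verify_claims(claims, source_text):
    """Return (supported, unsupported) claim lists via naive phrase matching."""
    source = source_text.lower()
    supported, unsupported = [], []
    for claim in claims:
        # A claim "passes" only if some three-word phrase from it appears
        # verbatim in the source -- a deliberately strict, crude test.
        words = claim.lower().split()
        phrases = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
        phrases = phrases or [claim.lower()]  # handle very short claims
        if any(p in source for p in phrases):
            supported.append(claim)
        else:
            unsupported.append(claim)
    return supported, unsupported

# Example mirroring the escalation-pathways incident described above:
source = "Issues may follow defined escalation pathways to the operations team."
claims = [
    "Issues may follow defined escalation pathways.",
    "A three-tier escalation process with 24-hour deadlines exists.",
]
ok, suspect = verify_claims(claims, source)
```

Here the second claim lands in the suspect pile, which is exactly the signal you want: a prompt to go back to the document, not proof that the claim is wrong.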
I started adding phrases like "based only on what's explicitly stated in the document" and "do not infer or extrapolate beyond what's written." The responses became more conservative and, paradoxically, more useful. The AI stopped trying to be helpful by filling in gaps and started being helpful by highlighting what was actually there—and what wasn't there.
When I asked it to compare two documents, I'd follow up with: "Are there any differences you noted that require you to infer meaning rather than compare explicit statements?" This forced the AI to distinguish between what it could prove and what it was guessing.
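If you use these guardrail phrases often, it can help to bake them into a reusable template. The sketch below is just string assembly—build_prompt is a made-up helper, and the resulting text can be pasted into any chat tool or sent through any API you have access to:

```python
# Sketch of wrapping a question with the "stick to the source" guardrails
# described above. The phrasing comes from the article; the function is
# illustrative and produces a plain string for whatever AI tool you use.

GUARDRAIL = (
    "Answer based only on what's explicitly stated in the document. "
    "Do not infer or extrapolate beyond what's written. "
    "If the information isn't in the document, say so."
)

FOLLOW_UP = (
    "Are there any differences you noted that require you to infer meaning "
    "rather than compare explicit statements?"
)

def build_prompt(question, document_text):
    """Combine the guardrail, the question, and the pasted document."""
    return f"{GUARDRAIL}\n\nQuestion: {question}\n\nDocument:\n{document_text}"

prompt = build_prompt(
    "Which responsibilities overlap with the QA role?",
    "...paste your document here...",
)
```

The follow-up question is kept separate on purpose: asking it after the first answer, rather than bundling it in, forces the AI to audit its own response.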
It was more work than I'd expected. The AI didn't magically do my job for me. But it did help me process information faster once I learned to use it as an analytical partner rather than an oracle.
The Revelation: It Wasn't Cheating After All
Here's what changed my entire perspective:
The AI gave me the initial analysis in minutes instead of hours. But I still had to:
Verify every single point against the source documents
Apply my 20+ years of expertise to validate the logic
Make all the strategic decisions about what mattered most
Use my judgment about what to present to leadership
The AI didn't do MY work. It did the tedious comparison work so I could focus on the strategic thinking that actually requires my expertise.
The work I'd been spending three hours on—reading, highlighting, copying text into a spreadsheet—that didn't need 20 years of business operations experience. A junior analyst could do that. Or an AI.
But the synthesis? The strategic judgment? The ability to catch when something sounds plausible but isn't quite right? That's where my value lives.
I'd been spending my most valuable hours on the least valuable tasks.
It wasn't cheating. It was working at the level I'm actually paid to work at.
What I Learned: The Real AI Basics
After a month of stumbling through this, here's what I wish someone had told me from the beginning:
1. Specificity is everything. "Summarize this" gets you a summary. "Based on what's explicitly stated in this document, identify the key roles and responsibilities, and highlight any areas where responsibilities are ambiguous or undefined" gets you actual insights. The more specific your question, the more useful the answer.
2. AI will confidently make things up. This is not a bug. It's not a sign you're using the wrong tool. It's how AI works. It generates plausible-sounding text based on patterns, and sometimes those patterns lead it to fill in gaps with information that isn't there. You need domain expertise to catch this.
3. Your expertise matters more than ever. I thought AI would replace my need to understand the content deeply. Actually, it's the opposite. My KCS background—my training in verifying information and maintaining accuracy—was the only reason I caught the hallucinations. Without domain knowledge, I would have passed along fabricated details as facts.
4. Always verify against source material. Every claim the AI makes should be traceable back to your documents. If you can't find it in the source, assume the AI inferred it. This doesn't mean the inference is wrong, but it means you need to decide if it's reasonable, not the AI.
5. Prompt for honesty, not helpfulness. Add phrases like "based only on the source material" or "if this information isn't in the document, say so" to your prompts. The AI wants to be helpful, which sometimes means it tries to give you complete answers even when the complete information isn't available.
6. Context is your friend. Instead of asking one giant question about all your documents at once, break it down. Analyze one document thoroughly, verify the insights, then move to the next. Build up your understanding piece by piece rather than trying to process everything at once.
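The one-document-at-a-time workflow from point 6 can be sketched as a simple loop: analyze, verify, accumulate, then move on. Everything here is a placeholder—ask_ai stands in for whatever tool you actually use, and human_verified marks the step that code cannot do for you:

```python
# Sketch of the "one document at a time" workflow: analyze, verify, accumulate.
# ask_ai() stands in for whatever approved AI tool you use and is NOT a real
# API -- it is stubbed here so the loop is runnable on its own.

def ask_ai(prompt):
    # Placeholder: in practice this would call your AI tool of choice.
    return f"[AI analysis of: {prompt[:40]}...]"

def human_verified(analysis, source):
    # Placeholder for the manual step -- a person, not code, decides
    # whether each claim in the analysis traces back to the source.
    return analysis  # assume the reviewer kept everything, for this sketch

documents = {
    "org_structure.txt": "Defines manager and team-lead responsibilities...",
    "process_guide.txt": "Describes quality review and documentation duties...",
}

verified_insights = []
for name, text in documents.items():
    analysis = ask_ai(f"Identify key responsibilities in this document:\n{text}")
    verified_insights.append((name, human_verified(analysis, text)))
```

The point of the structure is the ordering: verification happens per document, before the next one is analyzed, so errors never compound across the whole set.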
7. You're allowed to iterate. My first prompts were terrible. My tenth prompts were better. My twentieth prompts actually got me useful information that I could verify. That's normal. You're not supposed to be an AI expert on day one.
What Still Feels Messy
A month in, I'm not an AI wizard. I still write prompts that don't quite work. I still sometimes use the wrong tool or forget to give enough context. I still have moments where I think, "This would be faster if I just did it myself."
But here's what's changed: I'm not paralyzed anymore. When my manager suggests trying AI for something, I don't freeze. I think about what I'm trying to accomplish, pick a tool that might work, and try something. If it doesn't work, I adjust and try again.
The barrier to entry isn't technical skill—it's permission to be bad at something for a while. It's the willingness to type a clumsy prompt, get a weird answer, and try again instead of giving up.
The Advice I'd Give My Month-Ago Self
If I could go back and talk to the version of me who was frantically trying to figure out how to use GitHub Copilot at 11 PM, here's what I'd say:
Start with whatever tool you have access to. If your company has approved AI tools, use those. Don't worry about whether you have the "best" AI. Focus on learning how to work with AI in general.
Your first ten prompts will be vague. That's fine. Everyone's are. The only way to get better is to try, see what happens, and adjust. Write down what you're trying to accomplish in plain English before you even start prompting.
Assume the AI is guessing. Not always, but sometimes. Your job is to figure out when. If it says something that sounds oddly specific or complete, go back to your source and verify. If you can't find it, the AI probably inferred it.
Your expertise is your superpower. Whatever you know deeply—whether it's KCS, finance, HR processes, legal compliance, anything—that knowledge is what lets you catch AI mistakes. Don't think of AI as replacing your expertise. Think of it as a tool that only works well because you have expertise.
Don't expect magic. AI won't read your documents and automatically solve all your problems. It's a tool for processing information faster, not a replacement for your thinking. But it's a really useful tool once you learn how to verify its outputs.
You're not behind. Everyone using AI went through this same awkward learning phase. They just don't talk about it. The difference between them and you isn't that they're smarter—it's just that they've already gotten through their month of feeling like they don't know what they're doing.
Where I Am Now
I still don't use AI for everything. I still do plenty of work the old-fashioned way—reading documents myself, thinking through problems without algorithmic assistance, having actual conversations with actual humans.
But when I have multiple documents I need to cross-reference? I use AI to help me identify patterns and potential overlaps—then I verify every single claim against the source material. When I'm trying to spot gaps in a process? I ask AI to analyze the structure—then I use my own judgment to determine if the gaps it identified are real or inferred.
When I need to quickly understand a new policy document? I ask the AI to break down the main points in plain language—then I read the actual document to make sure the summary is accurate.
The AI has become a tool in my toolkit for processing information faster. But my expertise, my critical thinking, my ability to verify—those are what make the tool useful. Without them, I'd just be copying and pasting hallucinations.
And the best part? I'm not scared of it anymore. I'm not intimidated by prompts. I understand that the AI is a language model that sometimes makes things up, and I know how to catch it when it does.
That might not sound like a huge accomplishment, but for someone who spent weeks avoiding AI because I didn't know where to start, it feels like progress.
If you're in that place right now—reading articles about AI, feeling like you should be using it, but having no idea how to start—I want you to know: you don't have to be an expert. You just have to be willing to learn two things: how to ask good questions, and how to verify the answers.
Your expertise in your field isn't obsolete because of AI. It's more valuable than ever. It's what lets you know when the AI is right and when it's confidently making things up.
Pick a tool. Ask a specific question. Verify the answer against your source material. Adjust your approach based on what you learn. It probably won't work perfectly the first time. That's okay.
You're not behind. You're exactly where everyone started.
And here's something I didn't expect: learning to ask AI better questions revealed something about how I'd been communicating with actual humans for the past 20 years. But that's a story for next week.
Coming next: How learning to "prompt" AI revealed a communication problem I didn't know I had—and accidentally fixed my emails, meetings, and delegation in the process.

