How AI Taught Me I'd Been Communicating Poorly for 20 Years
I thought learning to "prompt" AI was just about getting better answers. Instead, it revealed a communication problem I didn't know I had—and accidentally fixed my emails, meetings, and delegation in the process.
Last week, I wrote about my first month with AI—how I went from thinking it was "cheating" to realizing it could actually help me work at the strategic level I'm paid for.
What I didn't mention was what happened next.
After my initial breakthrough with document analysis, I got... enthusiastic. I started using AI for everything. Weekly reports. Email responses. Meeting prep. Project briefs.
And every single attempt was frustrating in a different way.
The AI would give me responses that were technically correct but completely missed the point. Or it would produce generic summaries when I needed strategic analysis. Or it would focus on the wrong aspects of a problem entirely.
I assumed the AI was just... not that smart.
Turns out, the problem was me.
My First Terrible Prompt
Here's what I typed when I wanted to consolidate those six organizational documents I mentioned last week:
"Consolidate these 6 documents and find the similarities."
Simple, right? Clear instruction. The AI should know what to do.
What I got back was a wall of text that basically said: "These documents all discuss organizational structure. They share common themes around roles, responsibilities, and processes. Here are some areas of overlap..."
Technically accurate. Completely useless.
I tried again with slight variations:
"Summarize the key points across these documents."
"What are the main themes here?"
"Compare these documents."
Every response was similarly generic. I was getting increasingly frustrated. The AI clearly couldn't understand what I actually needed.
The Moment Everything Changed
After about the fifth failed attempt, I stopped and asked myself: What am I actually trying to accomplish here?
Not "summarize documents." Not "find similarities." Those were tasks, not outcomes.
What I really needed was:
Identify which roles and responsibilities appear across multiple documents
Flag where different documents have conflicting requirements
Note gaps where something important seems to be missing
Organize everything by priority so I could see what needed immediate attention versus what could wait
Highlight anything that would require a leadership decision to resolve
Once I articulated that, I rewrote my prompt:
"Analyze these 6 policy documents for:
Core similarities in approach
Overlapping roles and responsibilities that could be consolidated
Conflicting requirements that need resolution
Gaps in coverage
Organize by priority and flag items requiring leadership decision."
Suddenly, I got back something I could actually use. Strategic analysis. Specific call-outs. Actionable insights organized in a way that made sense.
The AI hadn't gotten smarter. I had gotten clearer.
The Uncomfortable Realization
But here's where it gets interesting—and uncomfortable.
A few days later, I was writing an email to my team about an upcoming project. I hit send, then went back to working on an AI prompt for a different task. As I was carefully articulating exactly what I needed from the AI, a thought struck me:
Why am I being more specific with the AI than I am with my actual team?
I opened that email I'd just sent. It said something like: "Can you pull together the Q3 data and send me a summary? Thanks."
My team member would have to guess:
Which Q3 data? (We track about fifteen different metrics)
What format for the summary? (Spreadsheet? Report? Bullet points?)
What's the deadline?
What am I planning to do with it? (This context would help them know what to prioritize)
How detailed should it be?
I'd been assuming they could read my mind. Just like I'd been assuming the AI could read my mind.
The difference was that the AI had forced me to stop making that assumption. It couldn't guess what I meant. It could only respond to what I actually said.
What Happened When I Applied This to Everything
I started an experiment: What if I communicated with humans using the same precision I'd learned to use with AI?
Emails: Instead of: "Can you review this and get back to me?" I wrote: "Can you review this proposal for technical accuracy and flag any budget concerns? I need your feedback by Friday COB so I can incorporate changes before the Monday stakeholder meeting."
Response time dropped significantly. Back-and-forth clarification emails nearly disappeared.
Project Briefs: Instead of a vague paragraph about objectives, I started structuring them like AI prompts:
Specific deliverable
Success criteria
Constraints and requirements
Context for why this matters
Decision points that need my input
My team started asking fewer clarifying questions. Projects moved faster.
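If you happen to work in code, that five-part brief structure can be captured as a tiny template. This is purely an illustrative sketch; the function and field names are mine, invented for the example, not part of any real tool or template I use:

```python
# Illustrative only: the field names mirror the five brief elements above,
# but the function itself is a made-up example, not a real workflow tool.

def build_brief(deliverable, success_criteria, constraints, context, decisions):
    """Assemble a structured brief (works for a teammate or an AI prompt)."""
    sections = [
        ("Deliverable", deliverable),
        ("Success criteria", success_criteria),
        ("Constraints and requirements", constraints),
        ("Why this matters", context),
        ("Decisions needing my input", decisions),
    ]
    return "\n".join(f"{title}: {body}" for title, body in sections)

print(build_brief(
    deliverable="Draft of the client report, using last quarter's template",
    success_criteria="Covers response time and resolution rate",
    constraints="Draft by Thursday; final review call is Friday",
    context="The client flagged these two metrics in our last meeting",
    decisions="Anything where the data looks unusual or incomplete",
))
```

The point isn't the code itself; it's that forcing each element into a named slot makes it obvious when one is missing, which is exactly what a vague one-line request lets you skip.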
Meeting Agendas: Instead of: "Discuss Q4 planning" I wrote:
10 min: Review current Q3 status (where we stand on timeline and budget)
20 min: Identify Q4 priorities (decision: which 3 initiatives get resources?)
15 min: Flag blockers (decision: what needs to be resolved this week?)
5 min: Assign action items
Meetings stayed on track. Decisions actually got made.
Delegation: Instead of: "Can you handle the client report?" I said: "Can you draft the client report using last quarter's template? Focus on the metrics they specifically asked about in the last meeting—response time and resolution rate. I'll need the draft by Thursday to review before Friday's call. Flag anything where the data looks unusual or incomplete."
Less back-and-forth. Better results. Less need to redo work.
The Business Impact
I started tracking this, because I'm a process nerd and that's what we do.
After two weeks of applying "prompt engineering" principles to all my communication:
Weekly reports: Down from 45 minutes to 12 minutes (I still review and refine AI-generated drafts, but the initial creation is much faster)
Email admin time: Cut by 60% (fewer clarification emails, faster responses from others)
Meeting prep: 30% more efficient (clearer agendas meant less time figuring out what we were actually trying to accomplish)
Delegation rework: Reduced by about 40% (clearer instructions upfront meant less need to redo work)
Total time gained for actual strategic thinking: About 8 hours per week.
Eight hours. That's a full workday I'd been losing to poor communication.
The Real Lesson: AI Didn't Fix My Communication
Here's what's wild about this: The AI didn't fix my communication. It just revealed how broken it was.
For 20 years, I'd been working with smart, capable people who somehow managed to translate my vague requests into actual results. They were doing the mental work of figuring out what I probably meant.
I thought I was being efficient by keeping things brief. "They're professionals," I told myself. "They don't need me to spell everything out."
But what I was actually doing was:
Pushing cognitive load onto other people
Creating opportunities for misalignment
Generating unnecessary back-and-forth
Wasting everyone's time
The AI couldn't compensate for my vagueness. It just reflected my unclear thinking back at me until I fixed it.
And once I fixed it for the AI, I realized I should probably fix it for the humans too.
The KCS Connection
The irony isn't lost on me.
I'm a KCS-certified Knowledge Management Program Manager. I literally teach people how to structure information so it's clear, findable, and useful. I've spent years helping organizations improve their knowledge systems.
And apparently, I'd never applied those principles to my own day-to-day communication.
In KCS, we talk about the importance of context, structure, and clarity. We emphasize that good knowledge isn't just accurate—it's usable. It anticipates what the audience needs to know and provides that information proactively.
I knew all of this intellectually. But I wasn't living it in my daily work.
The AI made me live it. Because the AI doesn't let you get away with assumptions.
What I Learned About Prompt Engineering (That Has Nothing to Do with AI)
After a few weeks of this, here's what I understand about "prompt engineering"—for AI or humans:
1. Specificity is kindness.
Being vague doesn't save time. It shifts the burden of figuring out what you mean onto someone else. Being specific upfront is more efficient for everyone.
2. Context changes everything.
"Review this document" produces different results than "Review this document for technical accuracy before the Monday stakeholder meeting." The context tells people what to focus on and how to prioritize their effort.
3. Structure reduces cognitive load.
Numbered lists. Clear sections. Explicit questions. These aren't just formatting choices—they're ways to make information easier to process and act on.
4. Assumptions are expensive.
Every time you assume someone knows what you mean, you risk misalignment. The cost of being explicit is five extra seconds. The cost of being vague is hours of rework.
5. Questions reveal fuzzy thinking.
If you can't articulate exactly what you need, you probably haven't thought it through yet. Having to write a clear prompt forces you to clarify your own thinking first.
What Still Surprises Me
A month into this experiment, I'm still discovering places where my communication is vaguer than I realized.
Yesterday I asked my manager for "feedback on the proposal." He asked: "Feedback on the approach, the budget, the timeline, or all of it?"
Old me would have said "all of it" and waited for whatever he sent back.
New me said: "Primarily the approach—does this strategy align with what leadership is expecting? Budget is locked, timeline is flexible if you see issues with the phasing."
We had a five-minute conversation that answered my actual question. The old way would have been a week of email back-and-forth.
That's the thing: Once you start paying attention to this, you can't unsee it. You notice every time you're being vague. Every time you're assuming instead of clarifying. Every time you're making someone else guess what you mean.
It's uncomfortable at first. It feels slower to be this explicit. It feels almost pedantic to spell everything out.
But then you realize: You're not slowing down. You're avoiding the much bigger slowdown of miscommunication, rework, and wasted effort.
For Anyone Who Thinks They're "Already Clear"
I thought I was a clear communicator. I've been managing projects and teams for 20 years. I've run workshops on knowledge management. I have a Six Sigma Black Belt certification, which is basically a credential in process clarity.
And I was still being vague in ways I didn't recognize until AI forced me to stop.
So if you're thinking, "This doesn't apply to me—I already communicate clearly," I'd invite you to try this experiment:
Take the last email you sent that required action from someone else. Rewrite it as if you were sending it to an AI that will take everything literally and can't make any assumptions about context.
Did you have to add anything? Clarify anything? Specify anything you'd left implicit?
That's probably information the human needed too. They just did the work of filling in the gaps for you.
Where This Goes Next
I'm continuing this experiment, and it's evolving in interesting ways.
This week, I'm testing specific "prompt templates" for five common business scenarios:
Requesting information from a team member
Delegating a project task
Asking for feedback on a proposal
Running a decision-making meeting
Writing a status update
Some of these will work great. Some will probably fail spectacularly. That's part of learning.
But here's what I know so far: The thing I thought would make me obsolete—AI—actually made me better at the most human skill there is: communicating clearly with other people.
Next week, I'll share the actual prompts and templates that worked. The ones I'm now using for both AI and humans.
Because apparently, good communication is good communication, regardless of who—or what—you're communicating with.
Coming next: The 5 prompt templates that saved me 10 hours this week—and how you can use them even if you never touch AI.

