Category: Accounting & Tax

AI Is Not the Problem. Bad Workflows Are: What Actually Breaks When Firms Try to Use AI 

     

      Many accounting firms say they tried AI and walked away disappointed. The tools felt unreliable, the outputs were inconsistent, and trust broke down fast. But in most cases, the problem was never the AI itself. It was the workflow underneath it. 


      To understand why AI succeeds in some firms yet fails in others, Ace Cloud Hosting spoke with Jan Haugo, founder of SmartAccountant.ai and a leading voice in practical AI implementation for accounting professionals.

      Jan builds deployable AI workflows designed to withstand real-world conditions, pressure-tests tools in live environments, and documents what actually works. 

      With more than 20 years of hands-on accounting experience and as a co-author of Intuit’s official AI Certification curriculum, Jan helps firms move from experimentation to execution.  

      In this conversation, she explains why broken workflows sabotage AI, where firms force AI into the wrong places, and how to build guardrails that protect accuracy, data privacy, and professional judgment. 

      When accounting firms say they “tried AI” and it didn’t work, what usually went wrong first: the tool, the data, or the workflow around it? 

      The workflow, every single time. 

      Here’s what I see repeatedly: A firm hears about AI, picks ChatGPT or Claude, throws their messiest problem at it, usually something like bank reconciliations or GL review, and gets garbage results. They blame the AI. But the real issue? They’re trying to automate a process that was never clearly defined in the first place. 

      If your current manual workflow is chaotic, AI doesn’t fix it; it just makes the chaos faster and more expensive. 

      I worked with a six-person CAS firm that tried using ChatGPT for anomaly detection in client general ledgers. It failed spectacularly. Why? Because they never defined what “normal” looked like for each client. They hadn’t documented their decision trees: Which account variances matter? What’s the threshold for investigation? When do you override the pattern? 

      AI can’t read your mind. If you can’t explain your process to a new staff member in writing, you can’t explain it to AI either. 

      The firms that succeed with AI do something boring first: they map their existing workflow, identify the repetitive pattern-recognition steps, and then introduce AI as a tool within that documented process. The ones who fail treat AI like a magic wand they can wave at operational dysfunction. 

      Bottom line: Fix your workflows first, then add AI. Not the other way around. 


      What are the most common workflow problems you see that make AI outputs unreliable or hard to trust inside a firm? 

      Three problems show up in almost every firm struggling with AI reliability: 

      First: No verification checkpoints. Firms generate AI outputs but have no systematic review protocol. Who checks the work? When? Using what criteria? I teach what I call the “15-Minute Rule”: if AI does in 2 minutes what used to take 2 hours, you should invest at least 15 minutes in structured verification. Most firms skip this entirely, then wonder why errors slip through. 

      Second: Undefined decision trees. Your experienced staff members make hundreds of micro-decisions during any accounting task; they just don’t realize it because it’s become intuitive. AI doesn’t have intuition. When a GL shows an unusual spike in office supplies, your senior bookkeeper knows to check if it’s an annual insurance prepayment miscategorized. AI doesn’t know that unless you’ve explicitly told it what to look for and how to flag exceptions (a minimal sketch of such a rule follows below). 

      Third: Missing handoff protocols. AI-generated work has to integrate with human work, but most firms haven’t defined the handoff points. Does the AI output go straight to the client? To a reviewer first? Who’s responsible if something’s wrong? Without clear ownership and handoff procedures, AI outputs just create confusion and finger-pointing when issues arise. 

      The pattern: These aren’t AI problems. They’re operational maturity problems that AI exposes ruthlessly. 
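      To make “explicitly telling it what to look for” concrete, here is a minimal Python sketch of what a documented decision rule might look like, using the office-supplies example above. The account names, thresholds, and sample data are all hypothetical; a real firm would define these per client.

```python
# A minimal sketch of turning an intuitive review rule into an explicit,
# documented check. Account names, thresholds, and sample data are
# hypothetical -- a real firm defines these per client.

import pandas as pd

# Documented review rules: what counts as "normal" and when to investigate.
REVIEW_RULES = {
    "office_supplies": {
        "monthly_threshold": 500.00,  # investigate any month above this
        "note": "Check for a miscategorized annual insurance prepayment",
    },
}

def flag_exceptions(gl: pd.DataFrame) -> pd.DataFrame:
    """Return GL rows that exceed the documented threshold for their account."""
    flagged = []
    for account, rule in REVIEW_RULES.items():
        hits = gl[(gl["account"] == account) &
                  (gl["amount"] > rule["monthly_threshold"])].copy()
        hits["review_note"] = rule["note"]
        flagged.append(hits)
    return pd.concat(flagged)

# The spike a senior bookkeeper would catch intuitively, now caught by rule.
ledger = pd.DataFrame({
    "account": ["office_supplies", "office_supplies"],
    "month": ["2025-01", "2025-02"],
    "amount": [180.00, 4200.00],
})
print(flag_exceptions(ledger))
```

      The point is not the code itself. It is that the threshold and the reason behind it now live in a document the whole team, and the AI, can follow. 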

      Where does AI fit best in an accounting firm today, and where do you think firms are forcing it into the wrong places? 

      AI thrives in pattern recognition and formatting hell. It breaks in high-stakes judgment calls. 

      Best fits I’m seeing right now: 

      • Reconciliation preparation: AI can categorize transactions, flag anomalies, and prepare 80% of the grunt work so your staff focuses on the 20% that requires judgment 
      • 1099 readiness audits: Perfect for pattern-matching tasks like identifying missing W-9s, flagging vendor duplicates, and catching edge cases in contractor classifications (see the sketch after this list) 
      • Client communication polish: Taking your rough draft emails and making them professional without changing your meaning saves hours weekly 
      • SOP documentation: AI excels at taking messy process notes and structuring them into clear, trainable procedures 
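      To illustrate the 1099-readiness item above, here is a minimal Python sketch of the kind of pattern matching involved. The column names and vendor data are hypothetical stand-ins for an AP system export.

```python
# A minimal sketch of 1099-readiness pattern matching. Column names and
# vendor data are hypothetical stand-ins for an AP system export.

import pandas as pd

vendors = pd.DataFrame({
    "vendor_name": ["Acme Consulting", "ACME Consulting ", "Bright Design LLC"],
    "w9_on_file": [True, False, False],
    "paid_ytd": [12000.00, 800.00, 2500.00],
})

# Flag vendors paid $600 or more with no W-9 on file -- the classic 1099 gap.
missing_w9 = vendors[(vendors["paid_ytd"] >= 600) & (~vendors["w9_on_file"])]

# Flag likely duplicates by normalizing names before comparing.
normalized = vendors["vendor_name"].str.strip().str.lower()
duplicates = vendors[normalized.duplicated(keep=False)]

print("Missing W-9:\n", missing_w9)
print("Possible duplicates:\n", duplicates)
```

      This is exactly the high-volume, low-stakes pattern work AI handles well; the judgment call on each flagged vendor stays with a human. 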

      Where it’s being forced (and failing): 

      • Final tax strategy decisions: AI can research and summarize options, but nuanced client-specific tax planning still requires professional judgment 
      • Client relationship management: AI can’t read emotional cues, gauge client sophistication, or navigate sensitive financial conversations 
      • Ethical judgment calls: When you’re deciding whether to take on a client or how to handle a questionable transaction, that’s human territory 

      Here’s my framework: AI is your research analyst and your formatting assistant, not your replacement CPA. It should amplify your expertise, not make decisions that require licensure, experience, or relationship context. 

      The firms getting ROI are using AI for high-volume, low-stakes pattern work. The ones struggling are trying to hand it high-stakes, nuanced decisions. 

      What guardrails should firms put in place to protect accuracy, client data privacy, and consistency when AI becomes part of daily work? 

      Most firms do “security theater”: slides in a staff meeting about not uploading sensitive data. That’s not enough. You need systematic safety protocols. 

      Here’s what actually works: 

      Data redaction protocol: Before anything touches AI, sanitize it. Replace client names with Client_001 and Client_002. Remove SSNs, EINs, bank account numbers, credit card data, and individual employee payroll details. Use summary-level financial statements and account totals, not transaction-level detail with PII. This should be a documented, repeatable procedure, not a “remember to be careful” reminder. 
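      As a concrete illustration, here is a minimal Python sketch of what a documented, repeatable redaction pass might look like. The patterns and placeholder scheme are illustrative only, not a complete PII filter; a real protocol would also cover bank accounts, credit cards, and payroll detail.

```python
# A minimal, illustrative redaction pass -- not a complete PII filter.
# Covers client names plus SSNs and EINs in their standard printed formats.

import re

CLIENT_MAP: dict[str, str] = {}  # real name -> stable numbered placeholder

def redact(text: str, client_names: list[str]) -> str:
    # Replace known client names with stable numbered placeholders.
    for name in client_names:
        placeholder = CLIENT_MAP.setdefault(name, f"Client_{len(CLIENT_MAP) + 1:03d}")
        text = text.replace(name, placeholder)
    # Mask SSNs (123-45-6789) and EINs (12-3456789).
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN REDACTED]", text)
    text = re.sub(r"\b\d{2}-\d{7}\b", "[EIN REDACTED]", text)
    return text

print(redact("Smith LLC (EIN 12-3456789), owner SSN 123-45-6789.",
             ["Smith LLC"]))
# -> "Client_001 (EIN [EIN REDACTED]), owner SSN [SSN REDACTED]."
```

      The value is that the same pass runs the same way every time, which is the difference between a procedure and a “remember to be careful” reminder. 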

      Platform selection criteria: Not all AI tools are equal for accounting work. I use ChatGPT for structured data tasks like categorization and reconciliation prep; Claude for nuanced analysis, long-document processing, and executive summaries; and Perplexity for real-time tax law research. Each has different security models and strengths. Pick the right tool for the job and know what each platform does with your data. 

      The verification checkpoint system: Every AI output gets human review before it goes to a client or into a deliverable. Create a checklist: Does this make logical sense? Are the numbers internally consistent? Would I stake my professional reputation on this? If you can’t answer yes to all three, it goes back for refinement. 

      Firm-wide implementation: For practices with 10+ staff, these can’t be informal guidelines; they need to be installed as firm-wide protocols with accountability. Who’s responsible for training? Who audits compliance? What happens when someone violates the protocol? 

      The goal isn’t to eliminate risk entirely; it’s to be more rigorous with AI than you are with manual work. 

      If a firm wants to move from experimentation to real execution, what’s the simplest “first workflow” you’d recommend building and documenting? 

      Client email polish. Hands down. 

      Why this wins as your first AI workflow: 

      • Low risk, high visibility: You’re not touching client data or financial reporting. You’re taking the email you already drafted and asking AI to make it clearer, more professional, or more concise. Easy to verify (you read it before sending), impossible to catastrophically mess up. 
      • Immediate time savings: If you send 15-20 client emails weekly and spend 5-10 minutes per email agonizing over tone and phrasing, that’s 2-3 hours saved right there. Multiply across your team and you’re looking at 5-10 hours weekly reclaimed. 
      • Builds AI confidence: Success creates momentum. Your team sees AI as a helpful assistant, not a threat. That psychological win matters when you introduce more complex workflows later. 

      Here’s the simple build (takes 15 minutes; a minimal code sketch follows the steps): 

      • Open ChatGPT or Claude 
      • Create a prompt: “You’re a professional accounting firm communicator. Take my rough draft emails and make them clear, professional, and client-appropriate. Maintain my meaning and key points. Never add information I didn’t provide.” 
      • Test it with 3-4 of your recent sent emails 
      • Refine the prompt based on what you liked/didn’t like 
      • Document the final prompt and save it where your team can access it 
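      For teams that want to go a step further than pasting into a chat window, here is a minimal sketch of the same workflow as a script, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment. The model name is illustrative; the prompt is the one from step 2.

```python
# A minimal sketch of the email-polish workflow as a script. Assumes the
# official openai Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is illustrative.

from openai import OpenAI

SYSTEM_PROMPT = (
    "You're a professional accounting firm communicator. Take my rough draft "
    "emails and make them clear, professional, and client-appropriate. "
    "Maintain my meaning and key points. Never add information I didn't provide."
)

def polish_email(draft: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model your firm approves
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

draft = "hey, just checking u got the docs we need for the 1099s by friday thx"
print(polish_email(draft))
```

      Either way, a human still reads every email before it goes out; that is the verification checkpoint in its simplest form. 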

      What’s next after this wins: Meeting notes organization, GL anomaly detection, or SOP documentation. But start here. Prove AI works on something simple before tackling the complex stuff. 

      This is the exact first exercise we run in our curriculum: 15 minutes to build, immediately useful, zero data risk. It’s the gateway workflow that transforms skeptics into builders. 

      From AI Experiments to Real Execution 

      What Jan makes clear is that AI does not fix disorder. It exposes it. Firms that struggle with AI are usually asking it to automate judgment before they have defined process, review, and ownership. The result is faster output at the cost of weaker trust. 

      Firms that see real returns take a different path. They document workflows first, install verification checkpoints, and treat AI like a junior assistant that prepares work, not one that finalizes it. They start small with low-risk wins, build confidence across the team, and expand only into more complex use cases. 

      AI becomes powerful when it is boring, repeatable, and supervised. When firms focus on execution rather than experimentation, AI stops being a gamble and becomes infrastructure. 

      At Ace Cloud Hosting, we see firms of all sizes embracing cloud technology and AI to free up time, elevate client conversations, and make smarter decisions. As workflows automate and real-time collaboration becomes the norm, accountants must evolve into technology-driven advisors.  

      About Julie Watson

      Julie Watson loves helping businesses navigate their technology needs by breaking complex concepts into clear, practical solutions. With over 20 years of experience, her expertise spans cloud hosting, virtual desktop infrastructure (VDI), and accounting solutions, enabling organizations to work more efficiently and securely. A proud mother and New York University graduate, Julie balances her professional pursuits with weekends spent with her family or surfing the iconic waves of Oahu’s North Shore.
