A course from Jonathan Soma
Learn to design, test, and improve newsroom AI, whether you build the tools yourself or guide the teams who do.
Questions? Email jonathan.soma@gmail.com
Journalists and editors everywhere are experimenting with AI, but few understand how to evaluate what they build. The Automated Newsroom is a six-week, practical introduction to designing and testing AI workflows that serve real newsroom needs.
Each week you'll explore a different part of the automation lifecycle: building small workflows, connecting components, tracking their performance, and learning how to evaluate success or failure. You'll see how these ideas apply to real newsroom tasks like summarizing interviews, monitoring meetings, or creating multilingual versions of your reporting.
This first pioneer cohort is an experiment in itself: we'll adapt live to participant interests and emerging newsroom tools. You'll join a small group of journalists, data reporters, and newsroom technologists who want to understand how AI really fits into editorial work, not just theoretically but operationally.
By the end, you'll have a functioning automation (or two), an understanding of how to evaluate its performance, and a playbook for iterating on AI tools responsibly inside your organization.
Yes, we're going to show you how to put together AI workflows and pipelines using a handful of tools, but that isn't the important part.
The thing to pay attention to is that we're going to show you how to evaluate your pipelines, which lets you iterate and improve. That's how you actually take something from a vaguely useful prototype to a legitimate product.
We spend three weeks building and evaluating a pipeline together (a reader tip line!), then one week on product thinking, then finish up on the "advanced stuff" like custom interfaces and how to integrate things with your CMS (we're going to cheat on that one and use Tampermonkey or Google Apps Script).
It's going to be great. It'll be like my AI for investigative journalism course, but more directed, applied, and product oriented. So maybe not very much like it at all, really?
This course is designed for people across the newsroom who want to make AI practical:
Exploring how to safely automate parts of their workflow
Experimenting with structured AI pipelines
Overseeing newsroom AI projects
Responsible for strategy, ethics, and evaluation
Anyone wanting to understand AI beyond buzzwords (no coding required)
You don't need to be technical to benefit, but you'll come away understanding how the technical pieces fit together.
By the end of the course, you'll be able to:
Build and test a simple newsroom automation, from idea to prototype
Diagnose where and why AI systems fail, and how to fix them
Use tracing and evaluation to see inside your automations
Measure AI quality and reliability, not just "oh this seems good, maybe?"
Document and communicate findings to colleagues and managers
Apply product thinking to decide when AI should help, and when it shouldn't
You'll leave with reusable templates and a repeatable process for iterating on AI workflows long after the course ends.
Use the same tools active newsrooms are experimenting with, from spreadsheet-based AI to workflow builders like ActivePieces or n8n
Every lesson includes both no-code and code-friendly examples
Learn how to measure whether your automation actually works, not just that it runs
All examples come from editorial, investigative, or production contexts, not marketing or "AI productivity." If you're interested in that, just go browse the n8n workflows.
Data journalism professor at Columbia University, Soma has been teaching newsroom AI for over a decade. You can find some of his projects at Practical AI for Investigative Journalism and investigate.ai.
You'll gain hands-on experience with the tools and frameworks that power modern newsroom automation:
Visual workflow builders like ActivePieces, n8n, and Make.com that let you chain AI calls, APIs, and data sources without writing code.
Libraries like smolagents or Pydantic AI for building multi-step AI workflows with memory, tools, and decision-making capabilities.
Tools like Opik or Arize Phoenix that show you exactly what happens inside your AI pipeline: which prompts ran, what they returned, and where things broke.
Techniques like LLM-as-Judge, human labeling, and automated scoring to measure the accuracy, consistency, and reliability of AI outputs (there's a rough sketch of the judge pattern just after this list).
See how to run LLMs directly in Google Sheets for quick experiments, batch processing, and collaborative prompt testing.
Versioning, testing, and iterating on prompts systematically.
Creating realistic test cases and edge cases to stress-test your automation before it touches real newsroom content.
Defining success metrics, understanding user needs, and making design decisions about when to automate and when to keep humans in the loop.
All concepts are taught with both technical and non-technical paths; choose your own depth!
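To make the evaluation piece concrete, here's the kind of thing we mean by LLM-as-Judge, sketched in a few lines of Python. This is only illustrative: call_llm() is a hypothetical stand-in for whichever model or provider you end up using, and the rubric is a placeholder, not the course's actual grading prompt.

    # LLM-as-Judge, in miniature: one prompt produces the work,
    # a second prompt grades it against a rubric.
    # call_llm() is a hypothetical stand-in for your model of choice.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("swap in your provider's API here")

    JUDGE_RUBRIC = (
        "You are grading a news summary for factual faithfulness to the source. "
        "Score it 1-5 and reply with only the number."
    )

    def judge_summary(source_text: str, summary: str) -> int:
        prompt = f"{JUDGE_RUBRIC}\n\nSOURCE:\n{source_text}\n\nSUMMARY:\n{summary}"
        return int(call_llm(prompt).strip())

    # Before trusting the judge, run it over examples you've labeled
    # by hand and check that its scores agree with yours.

The point isn't the ten lines of code; it's that the judge is itself a prompt you have to test and version like everything else.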
Start small with a one-step automation: a tip classifier, a tone checker, a summarizer. Run real examples, discover where it fails, and learn basic evaluation through labeling and manual analysis.
Turn single steps into pipelines. Automate multi-stage tasks (classification → extraction → summarization) and see how small errors compound. Learn to create and use synthetic test data to surface weaknesses.
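If you're curious what "single steps become pipelines" looks like in code, here's a minimal sketch of that classification → extraction → summarization shape. Again, call_llm() and the prompts are hypothetical placeholders, not the pipeline we'll actually build together.

    # A three-stage pipeline where each step feeds the next, so a
    # mis-classified tip quietly corrupts everything downstream.
    # call_llm() is a hypothetical stand-in for your model of choice.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("swap in your provider's API here")

    def classify_tip(tip: str) -> str:
        return call_llm(f"Label this reader tip as news, complaint, or spam:\n{tip}").strip().lower()

    def extract_claims(tip: str) -> str:
        return call_llm(f"List the who/what/where/when claims in this tip:\n{tip}")

    def summarize_for_editor(claims: str) -> str:
        return call_llm(f"Write a two-sentence editor's note from these claims:\n{claims}")

    def run_pipeline(tip: str) -> dict:
        label = classify_tip(tip)
        if label != "news":
            return {"label": label, "claims": "", "note": ""}
        claims = extract_claims(tip)
        return {"label": label, "claims": claims, "note": summarize_for_editor(claims)}

Synthetic test data fits in right here: generate fake tips (spammy ones, borderline ones, ones in other languages), run them through run_pipeline, and see which stage breaks first.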
Add visibility to your systems using tools like Opik or Langfuse. Explore traces and spans, version prompts, and learn to spot where logic collapses.
Who is your automation for? What does success look like? You'll practice defining metrics, aligning with user needs, and documenting design trade-offs. And since you're reading so closely, how about a 10% off coupon?
Sometimes the question is just "what's possible?" We'll survey the AI tools available to us, along with how to use tools like Tampermonkey to inject custom workflows into your CMS without actually touching its serious back-end code (or maybe some Google Docs stuff!).
Turn evaluation into its own workflow. Build lightweight dashboards (Lovable, Sheets, or custom setups) to score outputs, measure reliability, and track changes over time.
Weekly sessions on Thursdays (ish) from 10am-12pm ET
All sessions are recorded and available within 24 hours
Professor, Columbia University Graduate School of Journalism
Jonathan Soma directs Columbia's Data Journalism M.S. and the Lede Program. He has taught automation and investigative data skills across the world, from Tokyo to Rio, and created Practical AI for Investigative Journalism (among a million other things).
His teaching emphasizes practical, transparent uses of AI that empower journalists, not replace them.
$1,100 USD with automatic pricing adjustments based on your geography. I bet you can find a coupon code if you look hard enough.
100% refund available before Session 2, no questions asked.
All sessions recorded and shared within 24 hours.
Open enrollment, no need for an application or anything like that.
You'll have access to a private Slack/Discord space organized by topic and project type. Weekly office hours are available for help and discussion.
No grades or certificates: just working automations, shared failures, and collective learning. Optional group showcase one month after the course ends.
No. Every lesson includes both no-code and code-optional paths.
Not at all. You can choose the technical depth that fits your goals.
Right now it looks like Claude for spreadsheet integrations, ActivePieces or n8n for no-code workflows, smolagents or Pydantic AI for code workflows, and Opik, Arize Phoenix and/or Lovable for evaluation and prompt management. Feature changes, region locks, and various other gotchas sometimes prevent us from using specific tools, so they're all subject to change!
Whatever you want! Even though we're focused on hands-on experience, the concepts should transfer to whatever tooling your newsroom uses.
All sessions are recorded and available online.
You'll receive an invoice suitable for reimbursement.
No, it's new! This is the pioneer cohort, so expect active iteration and direct feedback.
You'll have templates, how-tos, and skills to continue improving automations, plus a steep discount for the next advanced course (that should be in January or February).
Learn how to build, test, and improve newsroom automations, and finally understand what makes them work.
Starts Nov 2025 · Full refund before Session 2
Have questions about the course? Email me at jonathan.soma@gmail.com