AI and the Shape of Work

  • Peter Meyers
  • 2 hours ago
  • 3 min read


Most organizations are now experimenting with AI in some form. The real question is where it fits in the work people do every day.


AI changes where the bottleneck in knowledge work sits.


Many organizations begin by layering AI onto existing workflows. Teams experiment with prompts, generate drafts, summarize reports, and test where the tools might help.


That experimentation is useful. It reveals both the potential and the limits of the technology.


But in many places the structure of the work itself stays the same. AI is inserted into existing processes. The result is familiar: uneven results, inconsistent trust in the output, and a sense that the technology is helpful in places but not yet reliable as part of the work.


What is often missed is that AI does not simply make existing work faster (and sometimes it does not speed it up at all). It changes where the effort in the work sits.


The shift becomes visible in how work actually gets done.


For decades, much of professional work involved producing information. Teams spent time gathering material, synthesizing research, drafting documents, and preparing analysis before decisions could be made. Those early stages required real effort. They were slow, and that slowness shaped how work was organized.


AI collapses much of that effort. Information that previously required careful compilation can be organized in minutes.


But speed does not guarantee quality. AI can produce convincing material quickly, even when the underlying assumptions are weak, incomplete, or simply wrong.


When those early stages collapse, the structure of the work begins to shift.


The bottleneck moves from producing information to evaluating it.


Instead of effort being concentrated in gathering and producing information, it shifts toward framing questions, establishing context, evaluating outputs, and determining whether the material being produced is reliable enough to support decisions.


The work does not disappear. It relocates.


Another shift becomes clear once teams begin using AI regularly.


The usefulness of AI is less about the prompt itself and more about how well the work is framed before the system is asked to contribute. AI can help explore an idea that is not fully formed yet. It can outline unfamiliar topics, surface possible approaches, and help organize early thinking. In that sense it can be useful even when someone is still figuring out what they are dealing with.


But the technology becomes far more effective when the problem itself is clearly defined. Context, constraints, and intent shape the output far more than clever phrasing ever will.


In practice, AI behaves less like an expert system and more like a capable junior contributor. It can produce work quickly, but it still requires guidance, iteration, and oversight.


The difference between shallow output and useful output often comes down to how well the work itself has been thought through before the system is asked to participate.


This is where the shift begins to show up inside organizations.


Workflows were designed for a slower environment, where preparation itself created natural pacing. When preparation becomes faster, the downstream parts of the system, including review, interpretation, and decision-making, carry more pressure.


Teams can generate more material than they are accustomed to evaluating.


This is why early AI adoption often feels uneven. The technology is capable, but the surrounding practices have not adjusted. Processes designed for a different information environment remain in place even as the speed of preparation changes.


The result is an imbalance. Work can begin faster than organizations are prepared to absorb it.

Over time organizations begin adjusting the work itself. Not by abandoning existing practices, but by clarifying where AI belongs in the workflow and where human evaluation must carry the weight.

Some tasks become easier. Others become more important.


Preparation becomes cheaper. Framing and judgment become more valuable. The human stuff.


Organizations that adjust well to this shift are not simply inserting AI into existing tasks. They are paying attention to how the technology shifts the structure of the work itself.


Once the early stages of knowledge work become inexpensive, the challenge shifts from producing information to evaluating it well.


That challenge is deciding what to trust, what matters, and what to do next.


When that shift happens, the conversation about AI changes.


The technology stops being the story.


The work becomes the story again.

