Amazon managers are increasingly relying on internal AI usage statistics to track adoption of the company’s generative AI tools, creating growing pressure inside some engineering teams to demonstrate visible engagement with the technology, according to current employees.
Several staff members said the pressure has begun to shape behavior in ways that have little to do with productivity itself, with some employees using internal automation systems to generate additional AI activity of little practical value.
The shift follows Amazon’s broader push to expand internal AI adoption across the company. Employees said managers have been encouraged to monitor participation levels as the group spends heavily on generative AI infrastructure, tooling and workplace integration.
One internal product that has drawn attention in recent weeks is “MeshClaw”, a company-built AI agent platform that allows employees to automate tasks across workplace software, according to people familiar with the matter.
The tool can connect with internal systems and applications including Slack, email workflows and coding environments, allowing users to delegate routine actions to AI agents operating on their behalf.
Several employees said the system had become associated internally with efforts to increase AI token usage statistics — measurements tied to how frequently employees interact with AI models.
“There is just so much pressure to use these tools,” one Amazon employee told the Financial Times. “Some people are just using MeshClaw to maximize their token usage.”
Another employee said some staff had started automating low-priority or unnecessary processes partly to increase their visible interaction with internal AI systems.
“When they track usage it creates perverse incentives and some people are very competitive about it,” another current employee told the FT.
Amazon has told employees that AI token usage is not part of formal performance evaluations, according to people familiar with the matter. Some employees said, however, that many workers still believe managers pay close attention to participation levels.
The company has also adjusted the visibility of some internal usage metrics in recent months, according to a person familiar with the matter. Team-wide access to certain statistics was reportedly reduced, so that individual usage figures are now visible largely to the employees concerned and their managers.
Employees said the visibility of the numbers alone had altered behavior inside some teams, particularly among workers concerned about how AI adoption was being perceived internally.
Amazon is expected to spend roughly $200bn on capital expenditure this year, with most of that investment linked to AI infrastructure and data centers.
Inside some large technology companies, employees said adoption metrics were becoming increasingly hard to separate from broader discussions of performance, productivity and internal expectations.
Some employees described a growing disconnect between official company messaging around experimentation and the informal competitive culture emerging around visible AI usage.
Several employees said staff remained acutely aware that AI usage statistics were visible internally, even if managers did not formally treat them as performance indicators.
Meta employees have reportedly engaged in similar behavior internally, according to people familiar with discussions around AI adoption metrics inside large technology groups.
The use of internal rankings and participation statistics has spread quietly across parts of the industry as companies attempt to gauge whether employees are actively incorporating generative AI systems into everyday workflows.
MeshClaw itself was developed internally by dozens of Amazon employees, according to documents seen by the Financial Times. The system was partly inspired by “OpenClaw”, an open-source AI agent framework that gained popularity earlier this year among developers experimenting with autonomous workplace automation.
Amazon said in a statement that the tool was helping employees automate repetitive tasks and experiment with generative AI systems internally.
The company added that it remained committed to the “safe, secure and responsible development and deployment” of AI technologies.
Some employees said concerns about the software extended beyond workplace metrics to broader questions of operational oversight.
The system can execute actions across connected workplace environments, according to employees familiar with the platform, prompting concern among some staff about how much autonomy AI agents should be granted inside internal systems.

One employee described internal debate over how aggressively companies should automate workplace processes before governance controls were fully established, while multiple employees said some staff remained uneasy about the level of autonomy being granted to AI systems operating across workplace software and internal company environments.