
Meta Tracking Employees’ Keystrokes and Screens to Train AI: What Workers Need to Know


Meta just told its employees something most companies only hint at: your daily work is now training data.

Reuters obtained internal memos that confirm the story. Meta is installing new tracking software on U.S. employee work computers. The tool records mouse movements, clicks, and keystrokes. It also takes periodic screenshots of whatever appears on screen. The goal, Meta says, is to help its AI models learn how humans use computers. It does this by watching real employees do their real jobs inside the apps they already use for work every day.

The programme is called the Model Capability Initiative, or MCI. It sits inside a broader internal effort that Meta CTO Andrew Bosworth this week described as the Agent Transformation Accelerator. The goal is to build AI agents that can perform routine desk work on their own.

The timing is hard to ignore. Meta has committed up to 135 billion dollars in AI spending for 2026. Meanwhile, the company is preparing to cut as much as 20 percent of its workforce. The first layoffs reportedly begin in May. You do not have to draw the connection explicitly. It draws itself.


What Meta Is Actually Doing

The MCI tool runs only within a designated list of work apps and websites. It does not, Meta insists, capture everything: personal browsing and apps outside the approved list are excluded. The screenshots are periodic, not constant, and provide context for the keystroke and click data.

The internal memo framed the whole thing warmly. It told employees they could help train the company’s AI models by simply doing their normal jobs. No extra work is required. Just carry on as usual, and your behavior becomes the lesson.

Meta spokesperson Andy Stone confirmed the tool exists. He said the data will serve only for model training, not for employee performance reviews. The company also says it has safeguards to protect sensitive content. However, many employees who use work devices for personal banking or private conversations are understandably nervous. They are unsure what falls inside the capture window, even with those assurances.

Internal pushback has already started. Some employees have raised privacy concerns directly. Gizmodo reported that one employee called MCI a surveillance tool, adding there was no reason to pretend otherwise.

Why This Matters Beyond Meta

This story is about Meta today. However, it points at something every employer with a laptop fleet and an AI ambition will soon face.

For an AI agent to handle routine computer tasks, it needs to know how humans do those tasks. It needs to know which menus people click, how they navigate forms, and what shortcuts they use. That knowledge does not come from a textbook. It comes from watching people work.

Meta has chosen to watch its own employees. Other companies are going in different directions. For example, OpenAI asked third-party contractors earlier this year to upload real work products from previous jobs. Actual PowerPoints and spreadsheets became training material. The approaches differ. The goal is the same.

Wherever AI workplace tools go next, the raw material is human behavior at work. The key questions are who decides how companies collect, store, use, and profit from that material, and whether the people doing the work have any say in any of it.

What This Looks Like at Different Stages of Your Career

If You Are Early in Your Career

A lot of entry-level work is what people in offices call “busy work.” Pulling data from one system. Formatting it in another. Processing invoices. Updating records. These tasks feel tedious. However, they are also where you learn how a business actually works. You pick up the logic behind its processes and the informal knowledge that only comes from doing the job by hand.

If AI agents absorb that work by learning from people currently doing it, two things happen at once. First, the repetitive tasks may disappear. But so might the entry point where many people first develop real workplace knowledge. Where do the next generation of managers come from if AI automates the early rungs of the ladder?

That is not a reason to panic. It is, however, a reason to pay close attention to what you are learning, not just what you are doing.

If You Are Mid-Career

Workplace monitoring is not new. If you have worked in a call centre or a large bank, you have almost certainly had your activity logged. App usage trackers, screen time reports, and productivity timers have been standard in many industries for years.

What is new here is the stated purpose. Previous monitoring tools watched productivity. However, this tool records behavior to teach AI how to do the same job. That is a meaningful difference. Your company is not checking whether you work fast enough. Instead, it is recording how you work so that a system can eventually do what you do.

Meta says the data will not be used for performance reviews. That promise means something. However, it is a policy commitment, not a legal guarantee. Moreover, data that exists today has a way of finding new uses when circumstances change.

If You Have Been in the Workforce for Decades

Experience in any profession is not just about knowing the rules. It is also about knowing which rules to bend and which shortcuts actually work. That knowledge takes years to build. It lives in the habits and instincts of someone who has done the same job for a long time.

When that behavior becomes training data, an important question follows. Who owns the value of that expertise once an AI has learned it? Is it the employee who built those habits over twenty years? Or is it the company that captured them in six months of screen recording?

There is no legal or industry standard yet that answers that question. That absence is itself significant.

The Hard Questions Nobody Is Asking Out Loud

Who owns the skills once an AI learns them? Employees are paid to do their jobs today. However, their daily patterns also help the company build AI agents that may automate parts of that same job tomorrow. The company captures both the labor and the learning. Workers currently receive no share of the upside from that AI investment.

Can a company promise not to use this data for performance reviews forever? Meta says it will not. However, this is a policy commitment today, not a legal one. Policies change with leadership. Additionally, once a dataset exists, the pressure to extract more value from it rarely disappears.

What counts as “work” on a work computer? The MCI tool runs only on approved work apps. But the line between work and personal life on a laptop is blurry for most people. A quick bank check. A message to a family member. A health search between tasks. Employees cannot always be certain what the tool captures. That uncertainty creates its own stress.

Is consent real when you cannot say no? Meta informed employees about the system. However, informed consent and genuine consent are different things. Being told about a policy is not the same as having a real choice. If the alternative is risking your job, consent becomes a formality.

Will this make work better or just more stressful? If AI agents handle tedious tasks, workers can focus on more meaningful work. That is the optimistic version, and it is not impossible. However, surveillance changes behavior. Consequently, people become more cautious and less willing to experiment. Research documents this pattern consistently across workplace settings.

Today This Is a Meta Story. Tomorrow It Might Not Be.

The MCI tool is currently rolling out across Meta’s U.S. offices. However, Meta is a global company. It has contractors, partners, and offices across multiple continents, including Africa.

Moreover, when a technique works at Meta’s scale, other companies take note. The outsourcing sector in Nigeria, Ghana, Kenya, and South Africa employs tens of thousands of workers. Many of them do exactly the kinds of tasks that AI agents are being built to handle: data entry, customer service, content review, and administrative work.

If those AI agents prove effective, companies with African operations will face the same question: do we use this approach too? The conversation about AI and jobs in Africa often gets framed as a future concern. However, this story signals that the future is arriving on a specific timeline. And other people, in offices most workers will never see, are currently setting that timeline.

What You Can Do Right Now

You do not have to work at Meta for this to be relevant to you. Here are three practical steps that apply wherever you work.

Ask your employer the right questions. Find out what monitoring tools run on your work devices. Ask directly whether any of that data trains AI systems. Also ask whether you can access or request logs connected to your account. You may not get clear answers. However, the answers you do get will tell you a lot about the culture of the place you work.

Treat your work device as a work device. This has always been good practice. It is now more important than ever. Keep anything sensitive on a separate personal device. This includes personal finances, health information, and private conversations. Not because your employer is necessarily doing anything wrong. Rather, because you cannot always verify exactly what is being captured and when.

If you are looking for a job, ask about this in interviews. The question “does this company use AI monitoring tools on employee devices?” is reasonable and professional. How a company answers it tells you something real about how it sees the people who work for it. If the company has not thought about it at all, that is also a telling answer.

The Bigger Picture

Meta spent more than 14 billion dollars to acquire a stake in Scale AI. Scale AI’s entire business is data labeling. That is precisely the kind of human work that teaches AI models how to do things. Furthermore, Meta has committed 135 billion dollars to AI spending this year. At the same time, it is preparing to cut a fifth of its workforce.

None of those facts are hidden. They are all public. When you put them together, the story they tell is not complicated. The company is investing in tools that will reduce its need for certain kinds of human labor. Moreover, it is using its current human labor to build those tools.

That is not a conspiracy. It is a business strategy. However, every worker and every policymaker should be asking the same questions. What do we want the rules of that strategy to be? Who benefits from the AI that workers’ daily behavior helps to build? And what say do workers actually have in how their work lives get recorded, learned from, and eventually replaced?

Those questions do not have easy answers. But they deserve honest ones. And right now, almost no one in power is providing them.
