US Lawmakers Demand Transparency and Human Oversight in AI Workplace Tools

Top US lawmakers have pressed for greater transparency and human oversight as companies expand the use of AI tools that shape pay, schedules, and performance reviews. At a House hearing, lawmakers and witnesses raised concerns about AI being used to monitor workers, suppress organising, and potentially worsen bias. While some experts urged caution against rushing new laws, others called for updated data collection and stronger enforcement to address emerging harms. The hearing concluded with broad agreement on the need for better evidence to guide policies that protect workers while fostering innovation.

Key Points: US Congress Pushes for Stricter AI Rules in the Workplace

  • AI is already used for pay, schedules, and monitoring
  • Lawmakers warn of worker privacy threats and bias
  • Calls for transparency and human review of AI decisions
  • Experts debate pace of new regulations vs. existing laws

US Congress presses for stricter AI workplace rules

Lawmakers warn AI is shaping pay, schedules, and reviews, pressing for transparency, human oversight, and better data to protect workers.

"AI is no longer science fiction. - Congressman Rick W. Allen"

Washington, Feb 4

Warning that AI is already shaping pay, schedules, and performance reviews in workplaces, top US lawmakers have pressed for transparency and human oversight as companies expand the use of new tools.

At a House hearing titled 'Building an AI-Ready America: Adopting AI at Work' on Tuesday (local time), lawmakers questioned whether current labour laws and data systems can keep up.

Congressman Rick W. Allen, the panel's chairman, said AI "is no longer science fiction." He said it is already transforming industries. Allen said that Congress must protect workers while allowing innovation and growth.

Allen said policymakers need better data. He said federal agencies must track how AI is changing work. That data, he said, is needed for sound policy decisions.

Rep. Mark DeSaulnier, the top Democrat on the panel, said the risks are real. He warned that some employers use AI to monitor workers and suppress organising. He cited tools that track bathroom breaks and screen activity. He said such practices threaten worker privacy.

Bradford Kelley, a labour and employment attorney, urged caution. He warned against rushing new laws. Poorly written rules, he said, could slow innovation and hurt US competitiveness. Kelley argued that existing laws already cover most abuses. He also said conflicting state rules are creating confusion.

Labour economist Revana Sharfuddin said lawmakers face a data gap. Current federal statistics count jobs, she said, not tasks. AI often automates parts of a job, not the whole role.

"The job still exists," she said, "but the work has changed." She called for updated surveys to measure how workers use AI.

Tanya Goldman, a former worker protection official, said harms are already happening. She said employers use AI to set wages, manage schedules and monitor performance.

These systems, she said, can worsen bias and push unsafe work speeds. She warned that constant surveillance can chill protected activity.

Goldman called for stronger enforcement of existing laws. She also urged new safeguards aimed at AI. Those should include disclosure, human review of key decisions and testing for bias. She said states should be free to adopt stricter rules.

David Walton, a management-side attorney, said AI use has surged across hiring, safety and compliance. He said many employers are building internal controls. These include bias testing and keeping humans involved in major decisions. He said such steps can protect workers and boost efficiency.

Walton said workers need clear explanations. Without buy-in, he said, employees may work around the systems. Early input and feedback are key.

Democrats said enforcement agencies lack resources. They said agencies need staff and technical expertise to review complex AI systems. The hearing ended with broad agreement on one issue.

Lawmakers said better data is needed. Allen said policy must be guided by evidence so workers and employers both benefit.

- IANS


Reader Comments

Rohit P
While I agree with the need for safeguards, the attorney Bradford Kelley has a point. Over-regulation can stifle innovation. India is trying to become a global tech hub. We need balanced policies that protect workers but don't tie the hands of our startups and IT sector. Let's learn from the US debate.
Aditya G
Tracking bathroom breaks? That's just inhuman. 😠 This kind of surveillance culture is creeping into some Indian workplaces too, especially in the gig economy for delivery partners. Strong laws are needed, but more importantly, we need a cultural shift where companies value dignity over just metrics.
Sarah B
The data gap mentioned is the real issue. In India, we have a massive informal workforce. How will AI impact them? Our policy discussions often focus only on the formal sector. We need inclusive data that looks at all kinds of work, from a street vendor to a software engineer.
Karthik V
Completely agree with Tanya Goldman. Bias in AI is a huge concern. If an AI is trained on historical data from a biased society, it will perpetuate those biases in hiring and promotions. India, with its diverse social fabric, needs to be extra cautious. Disclosure and human review are basic first steps.
Michael C
David Walton's point about worker buy-in is key. You can't just impose a new AI system from the top. In my experience in multinationals here, systems fail when employees don't trust them or understand them. Early feedback and clear communication are essential for any tech adoption, AI or otherwise.

