"Everyone sharing his or her data to train A.I. is great if we agree with the goals that were given to the A.I. It’s not so great if we don’t agree with these goals; and if the algorithm’s decisions might cost us our jobs, happiness, liberty or even lives.
To safeguard ourselves from collective harm, we need to build institutions and pass laws that give the people affected by A.I. algorithms a voice in how those algorithms are designed and what they aim to achieve. The first step is transparency. Just as corporations must disclose their finances, companies and agencies that use A.I. should be required to disclose their objectives and what their algorithms are trying to maximize — whether that’s ad clicks on social media, hiring workers who won’t join unions or total deportation counts.
The second step is participation. The people whose data are used to train the algorithms — and whose lives are shaped by them — should help decide their goals. Like a jury of peers that hears a civil or criminal case and renders a verdict together, we might create citizens’ assemblies in which a representative, randomly chosen set of people deliberates and decides on appropriate goals for algorithms. That could mean workers at a firm deliberating about the use of A.I. in their workplace, or a civic assembly that reviews the objectives of predictive policing tools before government agencies deploy them. These are the kinds of democratic checks that could align A.I. with the public good, not just private power.
The future of A.I. will not be decided by smarter algorithms or faster chips. It will depend on who controls the data — and whose values and interests guide the machines. If we want A.I. that serves the public, the public must decide what it serves."
https://www.nytimes.com/2025/11/02/opinion/ai-privacy.html?unlocked_article_code=1.yU8.8BEa.DltbW_WwVhxN