News

A 2019 study found that an algorithm widely used in the U.S. for making decisions about enrollment in health care programs assigned white patients higher risk scores than Black patients with the same level of health need.
Under the right circumstances, algorithms can be more transparent than human decision-making, and can even be used to build a more equitable society.
Take a recent example in New York City: The police department has begun using algorithms to help decide where to deploy officers across the city. In 2015, the New York Police Department performed ...
Often, when there’s talk about algorithms and journalism, the focus is on how to use algorithms to help publishers share content better and make more money. There’s the unending debate, for example, ...
For example, a cohort of fewer than 15 students that is exempt from the algorithm is probably either a class at a private school or one studying a less popular subject.
In 2018, Google DeepMind's AlphaZero program taught itself the games of chess, shogi, and Go using machine learning and a search algorithm, Monte Carlo tree search, to determine the best moves to win a game within a ...
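AlphaZero itself pairs a deep neural network with Monte Carlo tree search; as a far simpler sketch of the self-play idea it rests on, the toy below learns tic-tac-toe state values purely by playing against itself. The learning rule and parameters here are illustrative, not DeepMind's method.

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. V[s] estimates the value of
# board state s from the perspective of the player who just moved.
V = defaultdict(float)
ALPHA, EPSILON = 0.3, 0.2   # learning rate and exploration rate (assumed)

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def after(b, i, p):
    nb = b.copy()
    nb[i] = p
    return "".join(nb)

def play_one_game():
    board, player, history = [" "] * 9, "X", []
    while True:
        legal = [i for i, c in enumerate(board) if c == " "]
        if random.random() < EPSILON:       # explore a random move
            m = random.choice(legal)
        else:                               # exploit: best state for the mover
            m = max(legal, key=lambda i: V[after(board, i, player)])
        board[m] = player
        history.append("".join(board))
        if winner(board) or " " not in board:
            reward = 1.0 if winner(board) else 0.0
            # Back the result up through the game, flipping sign each ply
            # because the two players alternate moves.
            for ply, state in enumerate(reversed(history)):
                sign = 1.0 if ply % 2 == 0 else -1.0
                V[state] += ALPHA * (sign * reward - V[state])
            return
        player = "O" if player == "X" else "X"

# Self-play: the program improves with no external training data.
for _ in range(20_000):
    play_one_game()
print("states learned:", len(V))
```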
For example, an algorithm called CB (color blind) imposes the restriction that any discriminating variables, such as race or gender, should not be used in predicting the outcomes.
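As a minimal sketch of that restriction, the snippet below drops the protected columns before fitting a model. The DataFrame, column names, and scikit-learn estimator are illustrative assumptions, not the CB algorithm's actual implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant data; every column name here is invented.
df = pd.DataFrame({
    "income":   [35_000, 52_000, 41_000, 78_000, 44_000, 61_000],
    "age":      [29, 45, 33, 51, 38, 27],
    "race":     ["A", "B", "A", "B", "B", "A"],   # protected attribute
    "gender":   ["F", "M", "M", "F", "F", "M"],   # protected attribute
    "approved": [0, 1, 0, 1, 1, 0],
})

PROTECTED = ["race", "gender"]

# The "color blind" restriction: the model is fit without ever seeing
# the protected attributes.
X = df.drop(columns=PROTECTED + ["approved"])
y = df["approved"]
model = LogisticRegression().fit(X, y)

print(model.predict(X.head(2)))
```

Note that dropping the columns does not remove correlated proxies (a ZIP code can still encode race), which is one reason "color blindness" alone is considered a weak fairness guarantee.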
For example, an algorithm such as COMPAS might purport to predict the chance of future criminal activity, but it can only rely on measurable proxies, such as being arrested.
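Here is a synthetic illustration of that proxy problem: the unobservable target ("offended") has the same base rate in both groups, but the measurable proxy ("arrested") differs because the assumed arrest rates differ. All names and rates below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data only: 'offended' is the outcome we actually care
# about but never observe; 'arrested' is the measurable proxy.
offended = rng.random(n) < 0.10                   # same base rate everywhere

# Assume arrest probability tracks patrol intensity, not behavior.
heavily_patrolled = rng.random(n) < 0.5
p_arrest = np.where(heavily_patrolled, 0.6, 0.2)  # invented rates
arrested = offended & (rng.random(n) < p_arrest)

# A model trained on 'arrested' would learn the patrol pattern:
for mask, name in [(heavily_patrolled, "patrolled area"),
                   (~heavily_patrolled, "other area    ")]:
    print(name, "offense rate:", round(offended[mask].mean(), 3),
          "proxy (arrest) rate:", round(arrested[mask].mean(), 3))
```

The offense rates match across groups while the arrest rates diverge, so a predictor fit to the proxy label ranks the heavily patrolled group as riskier even though behavior is identical.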
There are three key reasons why predictive algorithms can make big mistakes.

1. The Wrong Data. An algorithm can only make accurate predictions if you train it using the right type of data.
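As a small, hypothetical sketch of this failure mode, the model below is trained on one slice of a market and evaluated on another; the data-generating rules are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Train on one slice of the market only (the "wrong" data for the task)...
x_train = rng.uniform(50, 100, size=(500, 1))     # small homes, m^2
y_train = 3_000 * x_train[:, 0] + rng.normal(0, 5_000, 500)
model = LinearRegression().fit(x_train, y_train)

# ...then score homes from a range the model never saw, where the
# (invented) price rule includes a premium the training data lacked.
x_test = rng.uniform(200, 400, size=(200, 1))
y_test = 3_000 * x_test[:, 0] + 500_000 + rng.normal(0, 5_000, 200)

print("R^2 on training slice:", round(model.score(x_train, y_train), 3))
print("R^2 out of range:     ", round(model.score(x_test, y_test), 3))
```

The model fits its training slice almost perfectly yet fails badly out of range, because the pattern it learned was never representative of the data it is asked to score.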