The report, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights,” catalogs the growing sphere of influence represented by Big Data in society, including employment, higher education, and criminal justice.
With regard to the growth of automation and algorithmic artificial intelligence, the report states:
As data-driven services become increasingly ubiquitous, and as we come to depend on them more and more, we must address concerns about intentional or implicit biases that may emerge from both the data and the algorithms used as well as the impact they may have on the user and society. Questions of transparency arise when companies, institutions, and organizations use algorithmic systems and automated processes to inform decisions that affect our lives, such as whether or not we qualify for credit or employment opportunities, or which financial, employment and housing advertisements we see.

The report also notes how algorithmic technology could both bolster and endanger the relationship between law enforcement and local communities:
If feedback loops are not thoughtfully constructed, a predictive algorithmic system built in this manner could perpetuate policing practices that are not sufficiently attuned to community needs and potentially impede efforts to improve community trust and safety. For example, machine learning systems that take into account past arrests could indicate that certain communities require more policing and oversight, when in fact the communities may be changing for the better over time.

The White House says it wants to develop a framework for addressing these concerns so that flawed algorithms do not become a socioeconomic problem. The examples cited include the possibility of people being denied credit and housing due to inaccurate information. The fear is that automated technologies and algorithmic A.I. deployed without human oversight could lead to unfair treatment.
There is an undeniable irony in this position, given that the Obama administration has proudly outsourced many of its military strikes to unmanned drones and autonomous robots. In just the last couple of years, weaponized drones have carried out strikes in Afghanistan, Pakistan, Yemen, and Somalia.
In its seminal Drone Papers series, The Intercept reported that nearly 90% of the people killed in these strikes were not the intended targets. Thousands of innocent civilians have died because of Obama’s autonomous drones, and his notorious kill list has come under heavy scrutiny for accidents and mistaken targets.
The connection between algorithmic artificial intelligence and drone strikes may appear loose at first glance. But the principle behind the recent Big Data paper is protecting citizens from out-of-control technology. It’s hard to reconcile this concern with the glib lip service the administration pays to the collateral damage stemming from flawed targeting systems.
This article (Obama Administration Fears Artificial Intelligence and the Reason Is Morbidly Ironic) is free and open source. You have permission to republish this article under a Creative Commons license with attribution to Jake Anderson and theAntiMedia.org. Image credit: Global Panorama.