MIT fed an AI data from Reddit, and now it only thinks about murder

Norman is a disturbing demonstration of the consequences of algorithmic bias
By Bijan Stephen Jun 7, 2018, 11:11am EDT

For some, the phrase “artificial intelligence” conjures nightmare visions: something out of the 2004 Will Smith flick I, Robot, perhaps, or the ending of Ex Machina, like a boot smashing through the glass of a computer screen to stamp on a human face, forever. Even people who study AI have a healthy respect for the field’s ultimate goal, artificial general intelligence: an artificial system that mimics human thought patterns. Computer scientist Stuart Russell, who literally wrote the textbook on AI, has spent his career thinking about the problems that arise when a machine’s designer directs it toward a goal without considering whether its values are fully aligned with humanity’s.

A number of organizations have sprung up in recent years to address that risk, including OpenAI, a research group that was co-founded (and later departed) by techno-billionaire Elon Musk “to build safe [AGI], and ensure AGI’s benefits are as widely and evenly distributed as possible.” What does it say about humanity that we’re scared of general artificial intelligence because it might deem us cruel and unworthy and therefore deserving of destruction? (On its site, OpenAI doesn’t seem to define what “safe” means.)

This week, researchers at MIT unveiled their latest creation: Norman, a disturbed AI. (Yes, he’s named after the character in Hitchcock’s Psycho.)

Read more

Data Will Save Music

The writing is on the wall.
The music industry is dying.
Nobody buys music.
It’s the Wild West.
The last one might be true. But the rest? Not exactly.

In the Wild West, the winner of the shootout was always the one who was best armed and able to take the best shot. Nowadays, artists and executives need that same kill-or-be-killed attitude. It’s time to upgrade the arsenal.

Leonardo da Vinci left us with a quote that bridges this analogy:

“Principles for the Development of a Complete Mind: Study the science of art. Study the art of science. Develop your senses — especially learn how to see. Realize that everything connects to everything else.”

Science + Art. That’s the future of the music (and entertainment) industry.

Read more

What to expect from business intelligence in 2017

Major Growth

It looks like Business Intelligence will go from strength to strength in 2017. Organizations in a variety of global markets are planning major investments in their Business Intelligence strategies this year. Across the pond in the UK, more than three quarters of small and medium-sized enterprises are planning a major analytics or data project this year.

Where there is investment, there is research; and where there is research, there is innovation. That means we can expect some exciting steps forward this year as organizations fall over themselves to stay on the cutting edge of the discipline.

Data Diversity Is the Order of the Day

To keep ahead of the curve, businesses in 2017 are turning their attention to a wider variety of sources from which to draw their data. After all, why limit your insight when you are practically adrift in a sea of data?

If you can find a way to connect it to an analytics platform, it is a data source. That means businesses now have the technology to measure just about everything. Need qualitative data from customer reviews? No problem. Want customer behavior data from a physical product itself? It’s yours. Looking for information on which of your competitors your churned customers have moved on to? Right here.

The fact is, you cannot get too much data, and the greater the variety of sources you have for that data, the more comprehensive an understanding you can gain from it. This is why datasets and sources will become increasingly diverse in 2017.

Read more

Areas of AI & machine learning to watch closely

Distilling a generally accepted definition of what qualifies as artificial intelligence (AI) has become a revived topic of debate in recent times. Some have rebranded AI as “cognitive computing” or “machine intelligence,” while others incorrectly use AI and “machine learning” interchangeably. This is partly because AI is not one technology; it is a broad field made up of many disciplines, ranging from robotics to machine learning. The ultimate goal of AI, most of us would affirm, is to build machines capable of performing tasks and cognitive functions that are otherwise only within the scope of human intelligence. To get there, machines must be able to learn these capabilities automatically rather than having each of them explicitly programmed end to end.
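The distinction between explicit programming and automatic learning can be sketched in a few lines of Python. This is a toy illustration of the idea, not anything from the excerpt: a rule written by hand versus the same rule recovered from example data.

```python
# Explicitly programmed: a human writes the rule directly.
def fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# Learned: the machine recovers the same rule from examples alone,
# here via a minimal ordinary-least-squares fit for y = a*x + b.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training examples generated by the hand-written rule.
celsius = [0, 10, 20, 30, 40]
fahrenheit = [fahrenheit_explicit(c) for c in celsius]

a, b = fit_line(celsius, fahrenheit)
print(round(a, 2), round(b, 2))  # prints: 1.8 32.0
```

Real machine learning systems fit millions of parameters rather than two, but the shift in responsibility is the same: the programmer supplies data and a learning procedure, and the behavior itself is induced rather than spelled out.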

Read more

One Third of Americans Prefer a Software Robot Over a Human Boss

Digitization and automation are ever-growing topics in relation to the workplace.

A famous 2013 Oxford study on the future of employment estimated that up to 47 percent of American jobs may be automated by 2035; a brand-new McKinsey study shows that current technologies could automate 45 percent of job activities; and the business mantra goes that if you can digitize, you should, to gain a competitive advantage.

But how do we, as human beings, really feel about potentially working with or even for AIs, and what impact do we think they will have on our workplace?

A recent study conducted in the US, UK, and Denmark explores people’s openness to working with and for “unbiased computer programs,” defined as “a software robot that makes decisions or proposals for decisions based on data from HR, financial or market information. The software robot is unbiased, i.e. it is not affected by the personal, social and cultural bias that influence human decision making, but balances all input only based on the data.”

The study shows some surprising results in openness, and big geographical differences.

Read more

Reid Hoffman: A.I. Is Going to Change Everything About Managing Teams

Imagine a spider chart mapping a complex web of interactions, sentiments, and workflow within an office. What would your company look like?

When most of us think of artificial intelligence in the workplace, we imagine automated assembly lines of robots managed by an algorithm. LinkedIn’s Reid Hoffman has a different idea.

In an essay for MIT Sloan Management Review, Hoffman describes human applications for the technology. Among other things, he argues, data science could improve the way we onboard new team members, organize workflow, and communicate about performance. Addressing the question of how technology will change management practices over the next five years, Hoffman explains how the use of a “knowledge graph” will become standard management practice.

Read more

Preparing for the Future of A.I.

There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

Read more

Facebook Is Building AI That Builds AI

Deep neural networks are remaking the Internet. Able to learn very human tasks by analyzing vast amounts of digital data, these artificially intelligent systems are injecting online services with a power that just wasn’t viable in years past. They’re identifying faces in photos and recognizing commands spoken into smartphones and translating conversations from one language to another. They’re even helping Google choose its search results. All this we know. But what’s less discussed is how the giants of the Internet go about building these rather remarkable engines of AI.

Read more

Machines Won’t Replace Us, They’ll Force Us to Evolve

For all of human history, we have created tools that help us do what we want to do faster, better, and cheaper. But we have always had to direct those tools, telling them exactly what to do for us to achieve our goals. This hasn’t changed from the time of stone tools (which we had to wield with our hands) to modern digital design tools (which we wield with the click of a mouse).

Read more

Why image recognition is about to transform business

At Facebook’s recent annual developer conference, Mark Zuckerberg outlined the social network’s artificial intelligence (AI) plans to “build systems that are better than people in perception.” He then demonstrated an impressive image recognition technology for the blind that can “see” what’s going on in a picture and explain it out loud.

From programs that help the visually impaired and safety features in cars that detect large animals to auto-organizing untagged photo collections and extracting business insights from socially shared pictures, the benefits of image recognition, or computer vision, are only just beginning to make their way into the world — but they’re doing so with increasing frequency and depth.

Read more