AI is great, but we do need supervision

By Mark Kuijper

It would be going too far to say I’m a big Netflix fan, but every once in a while I’m ‘gripped’ by a well-done series. What I enjoy most is sharing my excitement about the plot twists and turns with family or friends. The question is whether that will still be possible in a few years’ time. I can imagine that everyone will soon have their own personal storyline, geared to their own interests. Artificial intelligence (AI) can make this happen.

I know: there’s a lot of scepticism about this technology in the Entertainment and Media sector. And that’s entirely understandable in an industry whose businesses derive their raison d’être and added value from human ingenuity and the sharing of creative ideas. The E&M sector’s most successful products and services come about when creative content is matched to brands and to the target audience’s expectations.

Our surfing behaviour determines the advertisements we see

And yet AI is already ‘entertaining’ us. Just think of the Facebooks and Googles of our world. They offer us personalised suggestions and promotions based on our web surfing behaviour. That in itself is quite convenient, because I’m not interested in advertisements for women’s fashion or skin care products.

But AI can also aid genuine creativity. Two years ago, Sony’s Computer Science Laboratories released the pop song ‘Daddy’s Car’. If you didn’t know better, you’d think it was a lost Beatles recording. In reality, the song was composed with the help of an AI system called Flow Machines.

When will an AI system create a hit song?

How long will it be before a song composed by an AI system heads the Top 40? It’s a valid question. In a survey that PwC conducted last year among E&M managers, a quarter of the respondents expected that AI would be able to create a hit song by about 2025.

Algorithms will impact personal lives

I think that, whatever happens, human beings should remain in charge of how AI is used. For all its advantages, AI also has a downside. There is a risk that AI will start creating AI: that robots will build robots. Algorithms are already selecting job candidates, deciding whether applicants qualify for a mortgage, and picking out people at airports who might be terrorists. In the future, driven by machine learning and other forms of AI, algorithms will increasingly take decisions that affect our personal lives.

In my opinion, those algorithms should be subject to clear rules and independent supervision. The EU’s new General Data Protection Regulation already takes a step in that direction: it requires businesses to explain the decisions their computers take. People who believe an algorithm has treated them unfairly can therefore seek redress. But businesses are understandably wary of revealing how their algorithms work, for competitive reasons. The solution may lie in an independent third party, such as an auditor.

Supervision based on trust

It would be a good idea to make agreements about this as soon as possible. Some aspects can be laid down in legislation, but businesses must also acknowledge their own responsibility here. A code of conduct would be appropriate, in my opinion, and organisations could work with the supervisory authority to draft industry-wide guidelines. That way, we can place more trust in AI and reap its full benefits.
