Theresa May has been forced to ditch whole chunks of her party's manifesto in the wake of the election, but one of the key non-Brexit policies to survive is the plan to crack down on tech companies that allow extremist and abusive material to be published on their networks. The recent terrorist attacks have strengthened the arguments of campaigners who have long said that it is far too easy to access this kind of content, and who have accused internet companies of wilfully ignoring the problem. The promised "Digital Charter" will aim to force those companies to do more to protect users and improve online safety.

With the growing power of tech giants like Google, Facebook and Twitter, connecting billions of people around the globe, is it pie in the sky to promise that Britain will be the safest place to be online?

On one level this is a moral argument which has been going on for centuries: what should, and should not, we be allowed to read and see, and who should make those decisions? But is this a bigger problem than freedom of speech? Have we reached a tipping point where the moral, legal, political and social principles that have guided us in this field have been made redundant by the technology? Do we need to find a new kind of moral philosophy that can survive in a digital age and tame the power of the tech corporations? Or is the problem uncomfortably closer to home - a question that each and every one of us has to face up to? Tim Cook, the chief executive of Apple, recently said that he was concerned about new technologies making us think like computers "without values or compassion, without concern for consequence."

Witnesses are Nikita Malik, Tom Chatfield, Mike Harris and Mariarosaria Taddeo.