With an AI back end, could we create an anti-abuse OS?

An OS-level feature could not only help us become better people, it could help us create a better world.

This is going to seem like a huge stretch, but hear me out, because I think the operating system could become a far more powerful tool for helping us moderate our own behavior than it is today.

The reason I’m starting with the OS – and it could be any OS – is that it is pervasive and largely within our control. Currently, much of the monitoring that surrounds us is designed to prove wrongdoing or capture information that could be used against us. But what if the OS had the capability to warn us about things that would do us harm?

We’ve now built virus protection into the OS, something that seemed impossible a decade or so back. Why couldn’t we use something like a modified key logger to provide behavior protection and flag everything from abuse to extreme depression?
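
To make that concrete, here is a minimal, purely illustrative sketch in Python of what such a behavior-protection hook might look like. The callback name (on_text_event), the risk_score placeholder and the threshold are all assumptions invented for this example; no shipping OS exposes an API like this today.

from dataclasses import dataclass

@dataclass
class TextEvent:
    app: str    # e.g. "email", "chat", "game" -- the hook sits below all of them
    text: str   # the text the user just typed

def risk_score(text: str) -> float:
    # Placeholder for an AI model that scores hostile or self-harming language, 0 to 1.
    hostile_markers = ("hate you", "worthless", "hurt you")
    return min(1.0, sum(marker in text.lower() for marker in hostile_markers) / 3)

def notify_user(message: str) -> None:
    # In a real system this would be a private, on-device prompt, not a report to anyone.
    print(f"[behavior protection] {message}")

def on_text_event(event: TextEvent) -> None:
    # Hypothetical OS callback fired for every text-entry event, regardless of app.
    if risk_score(event.text) > 0.6:
        notify_user(f"This message in {event.app} reads as hostile. Send it anyway?")

The point of the sketch is the direction of the flow: the warning goes back to the user first, not to an outside authority.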

Let me walk you through what I’m suggesting. 

Information as an asset, not a liability

We have a lot of malware in the market, and there are already policies and products that can scan email looking for terrorists, criminals and pedophiles. Most if not all of these tools are designed either to do us harm or to create an evidence trail that could result in a conviction or an adverse employment action like termination for cause. They effectively capture a ton of data about us that is used almost exclusively against us.

But there is nothing inherently good or bad about data. The same data that could point a pedophile to a target could also be used to identify a child as a potential victim, to flag behavior that is drifting toward violence, or to catch the beginnings of abuse.

Tied to an AI, this same kind of hostile data stream could instead be used to help us correct bad behavior – not only in our children, but in ourselves. Much like we provide dashboards to management, we could create a dashboard for parents and ourselves that provided a running estimate of the quality of our interactions, the effectiveness of our arguments, and whether we are trending toward being a good or bad person.
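
Purely as a sketch of that dashboard idea, the quality estimate could be little more than a rolling aggregate of per-interaction scores. Everything here – the class name, the 0-to-1 score assumed to come from some AI model, the window size and the thresholds – is a hypothetical illustration, not a description of any real product.

from collections import deque
from statistics import mean

class BehaviorDashboard:
    """Keeps a rolling window of interaction scores: 0 = hostile, 1 = constructive."""

    def __init__(self, window: int = 50):
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> None:
        # Called once per scored interaction (an email, a chat exchange, an argument).
        self.scores.append(score)

    def trend(self) -> str:
        # Compare the last ten interactions against the whole window.
        if len(self.scores) < 10:
            return "not enough data yet"
        recent = mean(list(self.scores)[-10:])
        overall = mean(self.scores)
        if recent < overall - 0.1:
            return "trending more hostile"
        if recent > overall + 0.1:
            return "trending more constructive"
        return "holding steady"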

We know that even young adults don’t have fully developed brains and don’t fully understand consequences – but they could be backstopped by parents, or even by their own focused dashboard. That dashboard could point out consequences they otherwise wouldn’t realize until they were experiencing them in real time.

Digital assistants

I’m not just talking about PCs. We are increasingly surrounded by digital assistants that are always listening. Our concern, rightly, is that what they hear could be used against us. But what if what they hear could be used for us? Let’s say there is an escalating argument between spouses that is heading toward violence. Backed by an AI, the assistant could determine the nature of the problem and then apply the most likely remedy. It might turn up its volume and tell a pertinent joke, it might set off an alarm, or it might alert the kids and/or the authorities, thereby preventing an event rather than just capturing the evidence of it.
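
One way to picture that graduated response is as a simple mapping from an AI-estimated severity to an intervention. The thresholds and actions below are illustrative assumptions about how an assistant might escalate, nothing more.

def respond_to_argument(severity: float) -> str:
    # severity: AI-estimated risk of the argument turning violent, from 0.0 to 1.0.
    if severity < 0.3:
        return "stay quiet"
    if severity < 0.6:
        return "defuse: raise the volume and tell a pertinent joke"
    if severity < 0.85:
        return "set off an alarm"
    return "alert the kids and/or the authorities"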

Why the OS and not an app?

The reason is that the OS sits underneath all the apps, and you can’t be sure where a behavioral problem will first emerge. It might as easily be on social media as in email, in instant messaging as in a collaboration tool, or within a game. You’d want the protection to be all-encompassing, so that the tool would catch the behavior early enough that the user could potentially self-correct it. Because if the solution just called parents or the police, there is a near-certain probability that it would be turned off.

There certainly could – and would – be elements of this that would provide information to employers and the police. But what makes this different from other monitoring efforts is that the goal is to protect the user from making a mistake, not just capture the data needed to fire or convict them.

Wrapping up: reduced sales

There are a lot of tools currently being used to capture information that can do us harm. Variants of many of these tools could be built into the OS to create a dashboard that could not only prevent mistakes, but help us grow into better people. With an AI back end, the dashboard could help us argue our positions better, help us manage our anger more effectively and prevent us from becoming the next Harvey Weinstein.

It would do this by pointing out emerging behavior trends and by identifying potential scams that are either phishing for information or pushing us to make decisions against our own best interests.

We are getting more and more used to constant monitoring, but this kind of feature – which should be, and I expect would be, opt-in – will create privacy concerns. Those concerns would certainly create an unavoidable, at least initial, drag on sales.

But think of the folks who would be most concerned. Wouldn’t these largely be folks who didn’t care that their news was fake, who were afraid of being caught, or who had dubious ethics? Wouldn’t you want most of those folks to use someone else’s product anyway?

Or, put more succinctly: given that we are being monitored anyway, wouldn’t we prefer a product that at least attempted to put our needs ahead of the interests of those who want to do us harm, over one that made no such attempt? Perhaps an OS feature could not only help us become better people, it could help us create a better world.

 
