Artificial intelligence is making great strides in the online world, helping to build better tools for cyber security, anti-spam, and behavioural and sentiment analysis.
As the technology improves, it can do more and different things. A small UK company is validating an AI-powered solution for handling harmful content. Existing products tackle individual aspects of the problem, but mixing and matching several of them at the same time is tricky; a single comprehensive solution is preferable.
The solution detects, quantifies and validates harmful content and misinformation, the latter through extensive cross-referencing. Harmful content covers hate speech, abuse, insult, threat, sexism, racism, clickbait, aggression, sentiment, opinionated content, toxicity and claims. The suite of algorithms provides real-time detection and filtering of harmful signals, and raises alerts to avoid risk situations in both inbound and outbound communications. Harmful signals are quantified to indicate the extent of harm. Further tools provide actionable content analysis, insight through various correlations, and a scorecard. Beyond risk management, businesses and organisations will also understand their content better.
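The described flow (detect harmful signals per category, quantify the extent of harm, alert above a threshold) can be sketched as a toy pipeline. This is purely illustrative: the company's actual algorithms are not public, and the keyword lists, category names and threshold below are invented assumptions, not the product's logic.

```python
# Illustrative sketch only: the real product's models and categories are
# not public. Keywords, categories and the threshold here are assumptions.

HARM_KEYWORDS = {
    "threat": {"kill", "destroy", "hurt"},
    "insult": {"idiot", "stupid", "loser"},
    "clickbait": {"shocking", "unbelievable", "secret"},
}

ALERT_THRESHOLD = 0.5  # assumed cut-off for raising an alert


def score_text(text):
    """Return per-category harm scores in [0, 1] and an overall alert flag."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = {}
    for category, keywords in HARM_KEYWORDS.items():
        hits = sum(1 for w in words if w in keywords)
        scores[category] = min(1.0, hits / 3)  # crude quantification of harm
    # Alert if any single category crosses the threshold
    alert = max(scores.values(), default=0.0) >= ALERT_THRESHOLD
    return scores, alert
```

In a real system the keyword matching would be replaced by trained classifiers per category, but the shape of the output (per-category scores plus a thresholded alert) matches the detection, quantification and alerting the text describes.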
The company has validated the MVP (minimum viable product) in the lab and is executing pilots with a news aggregator and a user-generated content (UGC) platform.
The company is seeking further users under licence agreements, as well as integrators and other developers for technical cooperation or joint ventures. The solution can be developed further and/or built into other products.