Bottlenose today announced the next version of its Nerve Center platform that aims to automate some of the work data scientists do when analyzing high-volume data streams.
A major issue with big data in the enterprise these days is the difficulty of hiring data scientists -- a challenge that seems to be rising in tandem with the volumes of data that need to be analyzed. Bottlenose's platform is designed to automatically discover patterns, trends and anomalies in streaming data, such as monitoring the dark Web to detect a potential security breach even before the target company's systems do. Nerve Center 3.0 adds machine learning to its capabilities.
"There is going to be more and more commoditization of data science and machine learning," founder Nova Spivack predicted during an interview last week, saying the process has been going on for decades and "we're about half-way through."
Bottlenose Nerve Center data-analysis tool.
He believes there's a space for what he sees as mid-level products -- something less expensive than CIO-level products with seven-figure price tags, yet providing more automated insights than desktop self-service BI tools.
What about organizations that can't afford the cost of platforms in the tens of thousands of dollars? He thinks the industry is "probably a few years away from the hundred-dollar price point."
The next step in driving down costs will be launching vertical applications, such as security threat detection for a specific type of system, which might cost a few thousand dollars per seat or less per year. It might be 2018 to 2020 before that drops into the hundred-dollar range, he said.
The two issues to solve in order to drive down costs? One is creating a user interface that is easier for more casual users to understand, he said. The other is lowering the cost of the massive amounts of compute power needed to perform real-time analysis on big data -- something that has been happening and is likely to continue.
Spivack added, somewhat provocatively if you're a data scientist, on Twitter: "I think that AI will disrupt data science and BI because the tasks in data science are so computational and repeatable."
("One edit," he added later: "...because many of the tasks are so computational and repeatable (not all of them). But this could free up 80% of their time for the hard stuff.")