Quote: the only thing is it hasnt been released to the public en masse.
It actually has, in quite a few areas, but people aren't necessarily aware of it. E.g. all autonomous cars use NNs (AIs). AIs evaluate surveillance-camera footage to judge people's behavior. And in the US, convicted criminals get rated on their likelihood of reoffending, and their custody gets handled accordingly.
Especially the latter gets heavily criticized: it has been shown to be far from impartial, trained on sample sets that are too small, and fed too much irrelevant information.
An exaggerated example: if you feed in too much irrelevant information, it can happen that an AI judges your chance of being a terrorist by your shoe size, just because most terrorists in the sample set happened to share your shoe size...
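The shoe-size effect is easy to reproduce on a toy scale. Below is a minimal sketch (made-up data, a hand-rolled decision stump, not any real risk-scoring system): in a tiny sample, an irrelevant feature can separate the labels perfectly by pure accident, and a naive learner will happily pick it.

```python
# Hypothetical toy data: 6 "people", features = (shoe_size, some_relevant_signal).
# By sampling accident, shoe size >= 44 perfectly separates the labels here.
data = [
    ((44, 0), 1),
    ((44, 1), 1),
    ((44, 0), 1),
    ((41, 1), 0),
    ((42, 0), 0),
    ((39, 1), 0),
]

def best_stump(data, n_features=2):
    """Exhaustively pick the single feature/threshold with fewest training errors."""
    best = None
    for f in range(n_features):
        for t in sorted({x[f] for x, _ in data}):
            errors = sum((x[f] >= t) != y for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best

errors, feature, threshold = best_stump(data)
# The learner picks shoe size (feature 0, threshold 44) with zero training
# errors -- a spurious correlation in a too-small sample, not a real cause.
```

With six samples and an accidental correlation, the "model" is 100% accurate on its training set and meaningless everywhere else, which is exactly the criticism above.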
The US also uses AIs to judge whether people are potential terrorists and then kills them with drone strikes (along with whoever happens to be nearby). No trial, nothing. Only the AI is judge and jury...
Officially they claim very low false-positive rates, but as the Snowden documents have shown, those official figures are bare-faced lies. But who cares here in the West, as we're not the ones being constantly terrorized.
NNs certainly offer great opportunities, but many people are not aware that they do not work like the human logical mind, and therefore have totally wrong expectations, which can make the combination quite dangerous. Think of the famous Tesla drivers who ended up in horrible accidents: they were so convinced the system works like a human driver that they blindly gave it full control, even in situations where the manufacturer clearly states it shouldn't be used.
Or take facial recognition. Current systems achieve pretty high recognition rates, but people assume these systems identify faces by features the way humans do. That is very, very wrong. They often distinguish faces by strange features that make no sense to the human mind, simply because those features happened to separate the faces best in their training set. E.g. there was once a famous test where people wore glasses with a specially printed pattern on the frame, and voilà, the system suddenly recognized them as, e.g., Brad Pitt...
No human would ever fall for that, but the AI did.
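The glasses trick works because of how these models score inputs. Here is a minimal sketch of the underlying idea (an assumed toy linear "face classifier" with made-up weights, not any real recognition system): a small, structured perturbation aligned with the model's weights pushes the score across the decision boundary, even though every individual pixel change is small.

```python
# Toy linear classifier: score = w . x ; score > 0 means "this is person A".
w = [0.5, -1.2, 0.8, -0.3, 0.9]      # learned weights (made up)
face = [0.2, 0.1, -0.1, 0.3, 0.0]    # "pixels" of a completely different person

score = sum(wi * xi for wi, xi in zip(w, face))
# score is negative here: correctly "not person A"

# Adversarial "glasses": perturb each pixel by a small epsilon in the
# direction sign(w_i), so every pixel nudges the score upward at once.
eps = 0.5
glasses = [eps * (1 if wi > 0 else -1) for wi in w]
adv_face = [xi + gi for xi, gi in zip(face, glasses)]

adv_score = sum(wi * xi for wi, xi in zip(w, adv_face))
# adv_score = score + eps * sum(|w_i|) -- enough to flip the decision,
# so the perturbed face is now classified as "person A".
```

The perturbation looks like meaningless noise to a human, but because it is aligned with features the model actually uses, the classifier flips its verdict. A printed glasses frame is just a wearable version of such a perturbation.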
That is also why these NNs are usually horrible at extrapolation, and depending on the problem, sample set, network topology, and learning algorithm, they can already be horrible at interpolation...
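The interpolation/extrapolation gap is easy to demonstrate even without a neural network. A minimal sketch (assumed toy setup: an ordinary least-squares line fitted to y = x² only on the range [0, 2]): inside the training range the error is modest, far outside it the prediction is wildly wrong.

```python
# Training data: y = x^2 sampled only on x in [0.0, 2.0].
xs = [i / 10 for i in range(21)]
ys = [x * x for x in xs]

# Ordinary least-squares line y = a*x + b (closed form).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def model(x):
    return a * x + b

err_inside = abs(model(1.0) - 1.0)      # interpolation: modest error
err_outside = abs(model(10.0) - 100.0)  # extrapolation: huge error
```

The model only "knows" the region it was trained on; asking it about x = 10 gets an answer that is off by roughly the entire true value. Real NNs fail the same way, just less visibly, because nothing in training constrains their behavior outside the sample distribution.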
I don't claim this is the truth, as it is just what has manifested in my mind at this point in time on this physical plane. So please don't feel offended by anything I say.