The qualities and flaws of directly trained expert systems are well known.

Ok, so what about systems that learn on their own? What are their qualities and flaws? Well, we don’t have as many examples of these, since they are much harder to understand, but we do have some:

We are getting some results in both areas: some systems, like machine translation, sit entirely on the side of supervision and rewards, while others, like Generative Adversarial Networks (the basis of deepfakes), sit mostly on the side of rewards the system generates for itself. But both carry this inherent conflict with our true goals.
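
To make that contrast concrete, here is a minimal sketch of a GAN training step. The framework (PyTorch), the toy networks, and the synthetic "real" data are all my own illustrative assumptions, not anything from the systems discussed above; the point is only that the generator's sole training signal is the discriminator's verdict, a reward produced inside the system rather than a human-supplied label.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Toy generator and discriminator; real models are far larger.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_batch):
    n = real_batch.size(0)

    # Discriminator step: learn to tell real samples from generated ones.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: its only training signal is the discriminator's
    # judgement, a reward generated inside the system, with no human label.
    g_loss = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

# A 2-D Gaussian blob stands in for "real images" in this toy run.
for _ in range(200):
    training_step(torch.randn(64, data_dim) * 0.5 + 1.0)
```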

But if you look at the history of GANs, they are extremely hard to train: it took five years of fast-paced, intensive research and enormous hardware resources to get them to their current stage. And ultimately, even this technology, which I would say changes our understanding of what computers can do more than any other, fits very poorly into our lives, and people are still wondering what practical purpose it can serve. After all, its goal is precisely to fake images: to generate things that are very close to real, but not identical to anything else. But we have no way of measuring that closeness, and we do have reliable ways of telling a generated image from a real one. So it still seems gimmicky.