Bit late to this discussion, but hey.
I probably have a very simplistic view of speech models: I tend to think of them as Markov chains, though in reality they're much more complex. The simplification is handy for an example. Say you have a parrot at home and let it listen to the radio while you're at your day job. When you come back you can listen to it repeating amusing things it learned while you were away, but you don't jump to the conclusion that you can have a meaningful conversation with it. Speech models generate far more coherent conversation than a parrot, but nevertheless I think that Googler who freaked out let himself be fooled. I'll try to explain why.
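To make the parrot analogy concrete, here's a minimal sketch of the Markov-chain view of a speech model: record which word follows which, then babble by sampling. The tiny corpus and function names are mine, for illustration only; real language models are vastly more sophisticated, which is exactly the point of the simplification.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, n=8):
    """Generate up to n more words by repeatedly sampling a next word."""
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the parrot never heard anything after this word
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "polly wants a cracker polly wants a nap"
model = train_bigram_model(corpus)
print(babble(model, "polly"))  # e.g. "polly wants a nap" -- fluent-sounding, zero understanding
```

The output is locally plausible because every transition was seen in the training data, yet nothing in the model corresponds to meaning.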
Say ML is an algorithm that, given input/output pairs, finds a function such that it can reapply that function to similar inputs. Does that mean it is self-aware? No, not even a chance. An ML model learns one exact thing, in contrast to a Universal AI, which learns to learn. That's what makes it a different kind of AI, and that's where the problems of ethics arise: what if a Universal AI chooses to learn itself the wrong thing, like the infamous robot Bender in Futurama?
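The "finds a function from input/output pairs" framing can be shown with the simplest possible learner, ordinary least squares on a line. This is my own toy example, not anyone's production system; the point is that the model recovers exactly one function and nothing else.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: 'find the function' from examples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Training pairs drawn from y = 2x + 1; the learner recovers that one function.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # learned a = 2.0, b = 1.0

# It reapplies the learned function to a similar input it never saw:
print(a * 10 + b)  # 21.0
```

It generalizes to nearby inputs, but ask it anything outside this one task and it has literally nothing to say. That's the gap between learning a thing and learning to learn.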
qsl: Skipped right past the "thinking about itself" (oh really? And how did you make this determination?),
Biologists run tests on animals to figure out which are self-aware (spoiler alert: some are, some aren't). They put a mark on the animal's face and place a mirror in front of it; if the subject tries to remove the mark, that's taken as a sign of self-awareness. Is a Universal AI self-aware? Probably, but not before it develops consciousness, and developing consciousness is learning about one's own identity. An AI that recovers from some kind of outside intrusion is most definitely self-aware: recovery is possible if and only if the AI has learned to identify intrusions from outside, and a necessary condition for that is learning one's identity, an idea of self and the world, i.e. self-awareness.
ML models are reflexes of real life, and there's a huge gap between a reflex and making a decision (thinking). Developing consciousness is no task for an ML model, but for an AI that chooses what to learn. Dr. Joscha Bach is the brightest lad I know of who explains what's possible and how: his "Ghost in the Machine" talk is the most entertaining introduction to the problems of consciousness and Universal AI I've ever listened to.
Edit: grammar.