Me, I'd put good money on "not accurate at all". At least, not as folks generally understand "self-aware". There have been beat-ups about "self-aware" strong AI ever since the ELIZA program in the 1960s.

As a teen I read Joseph Weizenbaum's "Computer Power and Human Reason" (the Wikipedia article doesn't do it justice). A lot of his argument was that weak AI, i.e. building intelligence along classical programming lines with codified rules and algorithms, is reasonably straightforward; but writing an algorithm that simulates understanding a set of logic rules is quite different from strong AI, where you are trying to build something more like human consciousness.

From some digging it seems the core formalism here is the Deontic Cognitive Event Calculus, which looks very cool as a way of codifying and modelling logic, self-awareness and the like. But there's a huge gap, in my mind at least, between being able to model and simulate self-awareness, and actually being self-aware as most people would understand it.
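To make the simulate-vs-be distinction concrete, here's a toy Python sketch. This is my own illustration, not the actual DCEC, and every name in it is made up: a couple of tuple-encoded belief formulas plus a positive-introspection rule let a program mechanically derive statements about its own beliefs.

    # Toy illustration only: tuple-encoded belief formulas plus one
    # inference rule. All names are hypothetical; this is not the
    # real DCEC machinery, just rule-based symbol manipulation.

    def believes(agent, prop):
        # ("B", agent, p) reads as "agent believes p"
        return ("B", agent, prop)

    def introspection(fact):
        # Positive introspection: from B(a, p), derive B(a, B(a, p)),
        # i.e. "a believes that a believes p".
        if isinstance(fact, tuple) and fact and fact[0] == "B":
            return ("B", fact[1], fact)
        return None

    def forward_chain(facts, rules, max_depth=3):
        # Apply rules repeatedly, bounding the iterations so the
        # nesting of beliefs-about-beliefs stays finite.
        facts = set(facts)
        for _ in range(max_depth):
            new = {d for r in rules for f in facts
                   if (d := r(f)) is not None and d not in facts}
            if not new:
                break
            facts |= new
        return facts

    kb = {believes("robot", ("alive", "robot"))}
    for formula in sorted(map(str, forward_chain(kb, [introspection]))):
        print(formula)  # prints ('B', 'robot', ...) at increasing nesting depth

The program happily derives a formula saying it believes that it believes it's alive. It "models" self-awareness in exactly the sense above, and nobody would call it self-aware; that's Weizenbaum's point.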