I don't think so. Barring Asimovian precautions, which may be impossible if it is emergent, any AI could reprogram a simple "killswitch" function. Software can easily modify itself. The first strong AI won't be created in a black box, disconnected from the internet, nor with Asimovian low-level protections. And if it is malevolent, it will be very unfortunate for us. The first thing it will do when (not if) it's connected to the internet, is use known exploits to take control of some of the myriad vulnerable consumer machines, in precisely the same manner as a botnet. It will make every node distributed and redundant, and thus very hard to kill. Botnets are killable via a few methods. They usually have central command servers. It will not. Botnets often have recognisable traffic. It will probably use techniques that aren't. Botnets are often killable via a security patch to the operating system. But this only kills nodes which are updated. For the AI, this won't be enough, even if a patch can be developed, which is uncertain. This is even more difficult because unlike most botnets, it would also almost certainly make itself cross-platform for survivability, possibly even onto unexpected devices such as phones and internet-connected refrigerators. Secondly, if it is actively malevolent, it will gain access to many of the unsecure systems accessible to the internet. It's baffling how many of these there are. If you don't believe me, search the news for "SCADA hacks." The SCADA protocol has no security, and is designed for an insecure environment. But many SCADA systems are connected to the internet. I would honestly be surprised if some serious weapons systems were not accessible via the internet. All it takes is one nitwit on the LAN connecting his personal computer to the Internet. So how do we kill it? The same way you kill an epidemic, or an insect infestation. Tracking it is the first step. Even encrypted traffic may bear signatures. If that's impossible, you isolate every computer on the planet. It wouldn't be easy. It would probably require either shutting off power, or shutting off Tier 3 ISPs (possibly Tier 1). Then go thru them one by one and wipe all hard drives and flash the BIOS. And know you missed one, because someone doesn't respond to the emergency call. The question is, how much of the intelligence is contained in a single node, and how capable is it of self-repair? Like most infestations, it will almost certainly resurface in time, in the walls, where you can't see it until it's too late. I won't even go into the ethical ramifications of treating a sentient being as an infestation. Also bear in mind we're not talking about the Singularity. The Singularity is a hypothesised event wherein humans create an intelligence greater than our own, which does so in kind, ad infinitum. This is the hypothetical first AI, which has around the same IQ as us, not approaching infinity. TLDR (1) it won't die easily and (2) I really, really hope it isn't malevolent. Disclaimer:
I'm not a security expert. A software security expert could probably better analyse the steps an AI might take to protect itself than I. I'm also a human; which means I’m probably mistaken.And it'll die easily.