Let's say you build a device that can recognise 99% of terrorists and criminals and has a 1% false-positive rate (the chance that a user who is not a terrorist gets flagged). Keep in mind that these are outrageously high numbers; no known algorithm comes close to them.
Now let's say this device flags a particular person. What is the probability that this person is a terrorist?
India has 300 million internet users. Let's say 1 million of them are terrorists (again, an outrageously high number).
The number of actual terrorists flagged by this device is `99% of 1 million`, which is 990,000, approximately 1 million.
The number of false positives is `1% of 299 million`, which is approximately 3 million.
So the total number of flagged users is 4 million, of which 1 million are actual terrorists. This puts the probability that a flagged person is a terrorist at `1/4`, or 25%. The probability that a flagged user is *not* a terrorist is 75%, three times higher.
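The counting argument above can be checked with a short script (all figures are the deliberately inflated ones from the example, not real estimates):

```python
# Hypothetical figures from the example above.
users = 300_000_000         # internet users in India
terrorists = 1_000_000      # assumed number of terrorists (deliberately inflated)
detection_rate = 0.99       # fraction of terrorists the device flags
false_positive_rate = 0.01  # fraction of innocent users wrongly flagged

true_positives = detection_rate * terrorists
false_positives = false_positive_rate * (users - terrorists)
flagged = true_positives + false_positives

p_terrorist_given_flagged = true_positives / flagged
print(f"Flagged users: {flagged:,.0f}")                              # ~4 million
print(f"P(terrorist | flagged) = {p_terrorist_given_flagged:.1%}")   # ~25%
```

Running the exact numbers gives 3,980,000 flagged users and a 24.9% probability, which the text rounds to 4 million and 25%.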
Now let's do the same calculation with more realistic numbers while keeping the number of terrorists inflated. Suppose the algorithm has a 20% false-positive rate and catches 50% of terrorists. This puts the probability of a flagged user being a terrorist at roughly 1/120, or less than 1%.
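The same calculation can be wrapped in a small helper (the function name is mine, for illustration) and run with the more realistic detector:

```python
def p_guilty_given_flagged(population, targets, hit_rate, fp_rate):
    """Probability that a flagged person is an actual target."""
    true_pos = hit_rate * targets
    false_pos = fp_rate * (population - targets)
    return true_pos / (true_pos + false_pos)

# Same inflated base rate (1 million terrorists out of 300 million users),
# but a more realistic detector: 50% hit rate, 20% false-positive rate.
p = p_guilty_given_flagged(300_000_000, 1_000_000, hit_rate=0.5, fp_rate=0.2)
print(f"P(terrorist | flagged) = {p:.3%}")   # under 1%
```

That works out to 500,000 true positives against 59.8 million false positives, i.e. about 1 in 120.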
Edit: I'm going to edit in a response I posted here.
OK, I've been seeing a lot of responses saying that 1/120 is a good chance and that false positives occur everywhere.
Firstly, all the examples I have used greatly exaggerate the situation. Nothing in the real world would come even close to a 1/120 ratio.
Secondly, even if you persist with the 1/120 figure, one terrorist per 120 flagged people is practically worthless.
Let's conduct an analysis that is largely independent of the number of terrorists (assuming it is less than ~1 million).
Since 1994, there have been roughly 35,000 civilian and military deaths due to terrorism in India (do note that deaths in recent years have declined to about a fifth of the numbers circa the year 2000).
Extrapolating from this, let's say India will suffer an additional 35,000 deaths due to terrorism between now and 2035. Let's assume all the terrorists who would be responsible for these deaths are active today.
If we persist with our 20%/50% rates, mass surveillance would give you a pool of about 60 million flagged people that contains 50% of the terrorists.
Now, the big assumption we are making is that the number of deaths scales linearly with the number of terrorists. This is most probably not true; however, with numbers this large I think it should even out, and it should be balanced by all the other assumptions I am making that increase the success rate.
Detailed surveillance (not mass surveillance; we have already done that in order to isolate these people) of 60 million people has the potential to stop 17,500 deaths in India over the next 20 years. That means that to save one life, you need to investigate roughly 3,500 of the potential terrorists your system flagged.
Let's say the lowest-level policeman can investigate two of these people per day (not 4 weeks per person, as /u/notfrommumbai suggests). The lowest salary a policeman in India earns is Rs 16,000 ($250) per month, or $8.30 a day. At that rate, it would take $14,525 of police time to save a single human life.
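Putting the cost argument into numbers (all figures are the illustrative ones from the paragraphs above):

```python
# Figures from the argument above (all illustrative, not real estimates).
deaths_next_20_years = 35_000   # extrapolated terrorism deaths until 2035
hit_rate = 0.5                  # fraction of terrorists inside the flagged pool
flagged_pool = 60_000_000       # people flagged at the 20% false-positive rate

deaths_preventable = hit_rate * deaths_next_20_years   # 17,500
people_per_life = flagged_pool / deaths_preventable    # ~3,400; the text rounds to 3,500

investigations_per_day = 2      # people one policeman can investigate per day
daily_wage = 8.30               # USD; lowest police salary, Rs 16,000/month

cost_per_life = (3_500 / investigations_per_day) * daily_wage
print(f"People to investigate per life saved: {people_per_life:,.0f}")
print(f"Cost per life saved: ${cost_per_life:,.2f}")   # $14,525
```

Note the rounding: 60 million / 17,500 is about 3,429, which the text rounds up to 3,500 before computing the $14,525 figure.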
Keep in mind that the GDP per capita in India is $1,500, and the annual per capita income is $500.
And all this doesn't even take into account the errors that would be committed by a lowly police officer being paid $8.30 a day.
This is a great breakdown! All too often, people fall prey to what's called neglecting the prior, or the base rate fallacy. Essentially, people equate the probability of detecting a terrorist with the probability that a person is a terrorist given that they've been detected, ignoring the fact that the prior probability of being a terrorist is very low.
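The fallacy described here is exactly what Bayes' theorem guards against; a quick sketch using the first scenario's numbers shows how far apart the two probabilities are:

```python
# Bayes' theorem: P(T|F) = P(F|T) * P(T) / P(F)
# T = "is a terrorist", F = "is flagged by the device".
p_t = 1_000_000 / 300_000_000   # prior: probability a random user is a terrorist
p_f_given_t = 0.99              # detection rate
p_f_given_not_t = 0.01          # false-positive rate

# Total probability of being flagged (law of total probability).
p_f = p_f_given_t * p_t + p_f_given_not_t * (1 - p_t)
p_t_given_f = p_f_given_t * p_t / p_f

print(f"P(flagged | terrorist) = {p_f_given_t:.0%}")   # 99%
print(f"P(terrorist | flagged) = {p_t_given_f:.1%}")   # ~25%, not 99%
```

The two conditional probabilities differ by a factor of four here precisely because the prior `p_t` is so small; with realistic priors the gap is far larger.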