How does it handle censorship?
Each person who adds or pins ("seeds") content would presumably be legally liable in their jurisdiction for hosting that content, just as they would be for hosting it on a regular web server. So IPFS is probably no more or less censorship resistant than the web of today. However, it would make it much easier for me to, for example, add content that my own government might want to censor, ask people outside of my country to pin it, and then stop pinning it myself, at which point I would perhaps no longer be liable for it (if no one knows that it was added by me).
The devs maintain a list of content added to IPFS that has resulted in DMCA takedown notices, so that people can avoid pinning it.
The real win for IPFS is not avoiding censorship but making it more accessible for everybody to publish any kind of file over the open web and to share the hosting load with others (although perhaps we could say that IPFS helps avoid censorship to the same extent that BitTorrent does).
Could IPFS have some kind of P2P DNS system? Could it set up something like .onion sites that are resistant to takedowns and DDoS attacks?
By default, every file and directory added to IPFS is given a hash, which makes that content accessible at /ipfs/`hash`. Everything added this way is immutable, meaning that link will always point to that exact file. If the file is changed and re-added, it will have a different hash. The mutable layer added on top of IPFS is called IPNS (the InterPlanetary Name System), which gives you a separate, stable name (derived from a public key) that you can repoint to the latest version of your files (like a website, or the latest version of a software repository). Human-readable names can then be mapped on top of the IPNS name (with DNS or any other naming scheme).
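The core idea of content addressing can be sketched in a few lines. This is a simplified toy, not the real CID format (actual IPFS addresses wrap a multihash in a base encoding), but it shows why adding the same bytes always yields the same address and any change yields a new one:

```python
import hashlib

def fake_ipfs_address(content: bytes) -> str:
    """Toy content address: the address is derived entirely from the
    bytes of the content, so identical content always gets the same
    address. (Real IPFS CIDs use multihash + a base encoding.)"""
    return "/ipfs/" + hashlib.sha256(content).hexdigest()

original = fake_ipfs_address(b"hello world\n")
edited = fake_ipfs_address(b"hello world!\n")

print(original == fake_ipfs_address(b"hello world\n"))  # True: same bytes, same address
print(original == edited)                               # False: any change yields a new address
```

This is why an /ipfs/ link is immutable, and why a mutable layer like IPNS is needed to point at "the latest version" of anything.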
The DNS gateways to IPFS (like ipfs.io and ipfs.pics) are stopgaps between HTTP and IPFS. Ideally, everybody would be running IPFS themselves, so that they could add, pin, and access content without going through an HTTP gateway. A gateway is centralized behind a web server and DNS, so you have to trust its operator to faithfully serve the unmodified content (same as with a typical website). Any website could use IPFS on the backend, either like a gateway (so that it's visibly using IPFS) or just as a storage mechanism for files, like you suggest with .onion.
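To make the native-versus-gateway distinction concrete, here is a small sketch. The /ipfs/ path layout matches what public gateways like ipfs.io actually serve; the helper names and the placeholder CID are my own:

```python
def native_path(cid: str) -> str:
    # What a local IPFS node resolves directly from the peer-to-peer
    # network: no web server or DNS in the middle.
    return f"/ipfs/{cid}"

def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    # The same content fetched over plain HTTP: the gateway operator's
    # server and DNS name sit between you and the network, so you are
    # trusting them to serve the bytes unmodified.
    return f"{gateway}/ipfs/{cid}"

cid = "QmExampleHashGoesHere"  # placeholder, not a real CID
print(native_path(cid))   # /ipfs/QmExampleHashGoesHere
print(gateway_url(cid))   # https://ipfs.io/ipfs/QmExampleHashGoesHere
```

The point is that the gateway adds nothing to the addressing; it only re-exposes the same /ipfs/ path behind an ordinary, centralized HTTPS endpoint.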
If you're accessing IPFS natively, then it's extremely resistant to DDoS attacks, since popular content is served by many peers rather than one origin server. But a gateway, or a site with IPFS behind a server, is as susceptible as any other website.