The distinction everyone seems to be talking about here, without using the right terminology, is between descriptive grammar and prescriptive grammar. Basically, descriptive grammar aims to take utterances of speakers of the language and uncover the rules underlying those utterances, while prescriptive grammar aims to provide rules and guidelines for producing those utterances.
So, your student was completely right in observing how native speakers actually use 'which'. From a descriptive grammar standpoint, it's not syntactically wrong. In fact, it's pretty much impossible for a native speaker to come up with a sentence that's syntactically wrong, as long as it's understood by other speakers. From a prescriptive grammar standpoint, whether or not it's syntactically correct depends on which set of grammar rules you're subscribing to.
As for your hyphen issue, English is very resistant to word pairs becoming compound words. First the word pair needs to enter common usage, then the hyphen is introduced, and eventually the hyphen is removed. This is in contrast to other languages, which add compound words very easily. If English were more like those languages, the whole thing would probably be a single word by now ('parttimejob'). 'part time' is currently in transition between being a common word pair and being hyphenated, so there's a little bit of ambiguity right now. From a descriptive grammar standpoint, the writer should just use whatever comes naturally, and that will reflect how far along that word pair is in becoming a compound word. From a prescriptive grammar standpoint, the writer should use whatever the grammar they're following dictates.
It seems you're teaching a prescriptive grammar for the purpose of helping your students succeed in academia. Perhaps it might help if you explain the difference between descriptive and prescriptive grammar, and then explain why you're teaching the prescriptive one and why academic writing is the way it is. Not everything can be explained by the written vs. spoken distinction or by the claim that written language needs to be less ambiguous. I mean, a written letter to your mom doesn't face nearly as many requirements as writing for an academic journal. While plenty of the standard exists to reduce ambiguity, a lot of it really is just convention. Some of that convention seems kind of pointless, and in terms of producing understandable writing today, it's totally unnecessary. But convention, itself, can be useful.

It prevents written texts within a community from going through the constant fluctuations everyday speech goes through, and only lets in the major changes to the language. In other words, it's more stable. When reading a paper from decades ago, you don't need to be familiar with all of the language trends from that time. No need to know what 'groovy' means, or whatever other terms and phrases they were using back then that never caught on in the long run.

Additionally, having a standard makes your writing more accessible to speakers of a variety of dialects of English. It means everybody needs to learn how to conform to the standard, but the benefit is that they don't need to struggle with the writer's idiosyncratic style every time they read a new paper - they only need to learn ONE variation of English. Speech patterns and spelling rules that are widespread enough and used consistently enough will eventually make it into the standard. These properties are really important in academia, and that's why it uses the writing conventions it does.