There is a debate among us students about the staff's statement that in the "course world", characters in UTF-8 are represented using only 7 bits (compared to the real world, where they are represented as bytes, i.e., 8 bits).
As I have understood it, the course staff only meant that in some compression algorithms (like LZ), where we intend to compress only ASCII characters, we could represent each character in 7 bits (because there are only 128 ASCII characters) and use the extra bit to signal whether this is a repeat element or not.
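To make my interpretation concrete, here is a minimal Python sketch of that idea. This is a hypothetical scheme for illustration only, not the course's actual algorithm: an ASCII code point fits in the low 7 bits of a byte, which frees the top bit to act as a flag (e.g. marking a "repeat" token in an LZ-style stream).

```python
def pack(char: str, is_repeat: bool) -> int:
    """Pack an ASCII char into the low 7 bits; top bit is the flag."""
    code = ord(char)
    assert code < 128, "only 7-bit ASCII fits alongside the flag bit"
    return (0x80 if is_repeat else 0x00) | code

def unpack(byte: int) -> tuple[str, bool]:
    """Recover the character and the flag from one byte."""
    return chr(byte & 0x7F), bool(byte & 0x80)

# A literal 'A' and a flagged 'A' share the same low 7 bits:
print(hex(pack('A', False)))  # 0x41
print(hex(pack('A', True)))   # 0xc1
print(unpack(0xC1))           # ('A', True)
```

Note this trick only works because ASCII needs just 128 code points; general UTF-8 text uses the high bit for its own multi-byte encoding, so the flag bit would no longer be free.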
As I have understood it, the course staff still mean that even in the "course world", characters in UTF-8 are represented using 8 bits. Am I right? Or is it that in the "parallel world of the course", characters are represented as 7 bits?
(The question popped into my mind while working through the formula in HW6 Question 2.)