Distributed Systems: Course feedback

First off, the response rate was fantastic: 34 answers from the 59 registered
(53 of whom actually participated in the course)! A big thank you for the effort!

This was my first regular lecture course, so this was very encouraging. :) The
main reason my response was delayed was that I needed to consult someone who had
seen more than one set of course feedback to help interpret the results the
system provides. (I still have trouble believing how well it went.)

Sneaky hidden learning goals on ambiguity tolerance:

Two central concerns that came up in the comments were the ambiguity of
concepts and, closely related, exercise questions where different people get
different answers while the course staff stubbornly refuses to say which is
right by providing model answers. (For some exercises we provided example peer
answers written by a student on the course, when they were available.)

Concepts are a bit like contracts between people for the purpose of
communication. We need abstract concepts to discuss complex real-world
phenomena, and they are kind of like boxes we try to fit the world into. Some of
the things we discuss are genuinely complicated, and fit pretty poorly into
their conceptual boxes once you start to think about it. (I can sympathize: it
hurts my head too.) That said, a big part of teaching is to simplify matters
just enough and no further, and I'm still looking for the optimal level,
particularly for some of the concepts that I boggle at myself. I'll keep this
in mind!

As for the exercises, what I believe hurts everyone's head here is that we're
dancing on the borderlands between the abstraction and the real world, so we can
feel the box changing its shape whenever we add an assumption or take one
out. We could have model answers that are "objectively true" if we abstracted
away details for long enough to end up with a nice, cubic, symmetrical box. But
you did not get away with that kind of exercise on this course; no, your
specialization line teaching staff are unfortunately entirely too evil for
that. The reason for this lies in our need to survive the messy real world as
well: if we have a world that's fitted inside one of these nice, abstract boxes,
we can learn all kinds of things about the box, like how boxes stack very
nicely. But then we go out and find that the world is actually a geoid, and
while you can stack boxes, geoids make a big mess if you try to put them on top
of each other.

So boxes aside, I certainly could have told you that the exact correct method
for setting up an emergency response team's communication network is
such-and-such. But I was a bit worried that you'd believe me, because the
lecturer hat sometimes messes with people's heads that way. So I didn't, in the
hope that your own brain would remain functional and tolerant of uncertainty
throughout your university studies.

Some random organizational notes:

1) Lectures on Monday at 10: I maybe wouldn't call it "inhumane" as one comment
did :), but yeah, I was lecturing half-asleep too, so don't worry if it felt
hard to stay focused.

2) Having the course span two periods instead of one: the "stretching" was done
in response to notable problems in earlier years when the course ran in a single
period. I don't think we want to return to that; on average I got the impression
this worked out better.

3) According to the feedback, there was too much, a suitable amount, and too
little coding on the course, and a notable chunk of the early workshops (paja)
covered bash basics. We took a somewhat new (and partially improvised) approach
to practicing coding, and the scaffolding in the workshops reacted to observed
needs. We will see how people do in the Distributed Systems Project this year;
hopefully better than last year.

4) The homework was converted from group work to individual work this round to
avoid the common problem of one member's trouble with the rules of plagiarism
escalating to the entire group. I also simplified the tasks quite a bit to
ensure people wouldn't just optimize their time and skip them altogether, so I
actually reacted positively to hearing that the course is still "too much
work". ;) The challenge is fitting particularly the coding exercises to the fact
that some of the students swim smoothly in messaging code while others struggle
with the bash basics.

5) Homework deadlines were scattered unevenly. The delay in getting the first
exercise task out was entirely my fault (I struggled with balancing the workload
against the pedagogical value), but the first two deadlines were deliberately
placed to support and guide people toward working individually outside the
lecture weeks (this is important for avoiding student burnout). I pushed the
last deadline just past the exam week too, but was maybe a bit too afraid of the
impending Christmas in the end.

The lecture concentration study:

We participated briefly in Juha Taina's study on how well people are able to
concentrate during lectures. The method was that students marked a cross on a
sheet whenever they noticed they weren't paying attention; the answers were
anonymous. The number of answers isn't high enough to draw grand conclusions
about human nature, but it seems to match expectations.

On the 5th, we had a theoretical lecture with about six "conversation breaks"
(discuss topic X for about two minutes with the person sitting next to you). On
the 12th, we had a more practical topic with fewer breaks. The graphs below
count the total number of crosses ("number of students that passed out" ;)) in
a given 5-minute slot. We had a third collection round too just in case, but
once the two committed lectures were done, the number of answers was so low
that I didn't screenshot that graph. I recall that the total number of sheets
returned on both days was 15.
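
Since the graphs are really just histograms, here's a minimal sketch of how such
a tally could be computed, with invented timestamps standing in for the real
crosses (this is for illustration only, not the actual study data or tooling):

    from collections import Counter

    # Each cross is the minute-offset into the lecture at which a student
    # noticed their attention had lapsed. Values invented for illustration.
    crosses = [3, 7, 12, 44, 46, 48, 51, 53, 58, 61, 87, 89, 90, 104]

    # Count crosses per 5-minute slot (slot 0 covers minutes 0-4, etc.).
    slots = Counter(minute // 5 for minute in crosses)

    for slot in sorted(slots):
        start = slot * 5
        print(f"{start:3d}-{start + 4:3d} min: {'x' * slots[slot]}")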

[Figure: concentration graphs from the two lectures.]

As you can see, it's not easy to maintain concentration over a two-hour lecture,
which is by default a rather passive affair from the student's point of view.
The break gives your brain a valuable breather, and apparently so do the
conversation breaks, since the curves had more 'dips' than expected.

The study was repeated on a few other courses, and in general you get a kind of
soft 'm' shape in the curves in the standard lecture format with a break in the
middle; the hardest moment to maintain concentration is around
mid-lecture. Except, apparently, when you have a conversation break or a few
around there. :)

As a final remark, I'm a bit concerned about how taking notes is falling out of
fashion. I understand that people consider it more important to be able to
listen freely than to write like mad so as not to miss anything. But once you
don't have to cover everything on the slides with your notes, the writing has a
much more important benefit: when you summarize things into shorthand notes,
even ones you never read again, the material sticks to your brain much better
than if you just try to memorize it by listening. And it helps with staying
awake too!

Some numbers to close with:

I am a bit puzzled that in the end no one complained about the strictness of the
lecturer in fully enforcing deadlines (late = 0 points) and suchlike. (For the
sake of posterity: the Ukko cluster was unstable/down for days close to the
first programming exercise deadline, and we just lived with it.) I'm curious
whether this is something that doesn't come up in course feedback in general, or
whether it was accepted through the discussions we had. The clever scoring
system I inherited probably helped here, though: it's possible to earn
12 + 12 + 42 = 66 points on the course while the grading scale is capped at 60,
so even if you miss some of the points for whatever reason, you can still get a
5. As a result, I at least felt that everyone had space to miss something for
whatever personal reason, and it didn't require individual lecturer judgement on
whether it was a "valid" reason for absence or not. Fairness is a big concern
for me.
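
To make the slack concrete, here's a minimal sketch of that cap in code; the
grade boundaries below are hypothetical placeholders of my own, not the official
grading table:

    # Hypothetical illustration of the scoring slack; the grade
    # boundaries below are invented, not the official ones.
    def course_grade(exercises1, exercises2, exam):
        # 12 + 12 + 42 = 66 points are available in total...
        total = exercises1 + exercises2 + exam
        # ...but the grading scale is capped at 60.
        capped = min(total, 60)
        for grade, threshold in [(5, 50), (4, 44), (3, 38), (2, 32), (1, 26)]:
            if capped >= threshold:
                return grade
        return 0

    # Drop a full 6 points anywhere and you still hold the maximum 60:
    print(course_grade(12, 6, 42))   # -> 5 (60 points of a possible 66)
    print(course_grade(12, 12, 42))  # -> 5 (66 points, capped at 60)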

The average grade given to the course was 4. I'm happy that most people felt the
course goals were clear (avg 4.1) and that the material supported them (avg
4.1). There was a bit more polarization on whether the course activities
supported learning, with a noticeable spike on "maybe" but very little actual
disagreement (avg 4.1). The questionnaire was advertised before the exam, so my
assumption is that the grading evaluations are for the most part estimates based
on the structural division where you can earn a pretty large share of the points
by working beforehand (avg 4.2). On the laboriousness of the course, opinions
were pretty evenly distributed, with a spike on "yes, it was", so the result is
apparently around the department average for master's level courses (avg 3.6).