In keeping with the generally bumpy ride on this course, its ending also didn’t go quite to plan. Completion certificates for those of us who achieved 55% or more were eventually issued a few days later than the originally scheduled date of 1st February.
However, the reason given for the slight delay was an intriguing one: some “unexpected legal issues with distribution” had occurred. There’s been no official word from the course team to all students as to what the issues were, but one student who is still caught up in them posted onto the course forum this message that they’d received:
Your certificate is being held pending confirmation that the issuance of your certificate is in compliance with strict U.S. embargoes on Iran, Cuba, Syria and Sudan. If you think our system has mistakenly identified you as being connected with one of those countries, please let us know …
It’s ironic that probably the least valuable part of the course – the certificate – is what the embargo has been applied to rather than the course contents which do have real value! No-one can fathom the strange workings of the minds of lawyers I suppose. Or perhaps, just maybe, edX were breaching the embargoes all along by allowing students connected with those countries to enrol and access the course materials in the first place. Whoops.
I guess it just goes to show that MOOCs still have some way to go before all of the kinks are ironed out and that the promised revolution may take a little longer to arrive in some parts of the world.
There’s just a day or so to go before the edX 6.00x final exam is opened up. There’s a 12 hour window to complete it in and the course team estimate that it will take around 4 hours of effort. I’m hoping to find some time to have a go at it over the weekend, but given the problems the course has suffered since the second midterm exam and that my grade is already a passing one, I can’t say that I’m all that enthusiastic about the prospect at the moment.
It’s not that the material presented has been bad. Quite the opposite – it’s certainly up to the standard of the ‘High Level Programming A and B’ courses I took at Warwick University many years ago as part of my computer science degree. With the possible exception of object orientation, which deserved rather more time than the mad single-week brain dump it received, the course has been well paced too.
The biggest problem has been the lack of leadership shown when things started to go seriously wrong with the schedule around 2/3rds of the way in. With hindsight, I fear that my “MOOC that failed to scale” rant in mid December was rather too accurate in its diagnosis of the key challenges facing edX if they really do want to be part of “the biggest single change to education since the printing press”.
Even before illness and personal tragedy had struck the (single?) member of staff attempting valiantly to keep the show on the road, key learning tools (finger exercises in the course jargon) had started to be omitted, problem sets were being issued late and incomplete, and a decision had been taken to cancel one of the 11 graded problem sets originally scheduled. In the event, another graded problem set was also dropped at the start of this year (on graph traversal problems – though it was quite good fun to do the ungraded version provided!). Even graded problem set 9, which was eventually issued, was rather perfunctory, bearing no relationship to the material the course had moved on to. Finally and belatedly, some real leadership was shown by the edX management and a full and welcome apology was issued to participants.
For my own selfish reasons as a lifelong learner, I want initiatives like edX to be successful and useful. I’m taking courses such as those offered by edX purely for “personal development reasons”. As such, they are highly unlikely to benefit my future career success one way or the other. Free of charge is therefore definitely a good model for me!
However, I’m concerned that the hype surrounding MOOCs potentially threatens the diversity of provision in higher education for those that really need it. If governments around the world start to believe that more traditional HE institutions are not required (and I’m including established distance learning providers like the Open University in this bracket), we run the risk of narrowing not only the subjects on offer to our young people, but restricting academic freedom and innovation by concentrating power and resources in ever fewer, richer and distant institutions. Such a move would result in increasingly expensive in-person tuition for the few who can afford it, with a restricted online offer for those who are unable to pay – or unwilling to mortgage their futures.
Paradoxically therefore, I believe that the unintended consequence of the proliferation of MOOCs could be to reduce access to the kind of higher educational qualifications which can genuinely act as an enabler of social mobility.
I haven’t written very much recently about 6.00x, because other than the lectures posted for weeks 10 and 11 (which were excellent as usual) there’s been no measurable progress to report.
At the time of the second midterm exam the course team announced that it was going to drop one of the problem sets for this presentation, with the final two problem sets (9 and 10) due to be released on 12th December and 19th December respectively.
However, it’s now 18th December and no finger exercises for weeks 10 and 11 have been issued, nor is there any sign of problem set 9. Worse, there’s been no official communication from the course team about the absence of exercises and problem sets; the last course-wide message posted covered the breakdown of scores from the second midterm exam. A couple of staff responses have been published in answer to the predominantly polite and constructive questions posted on the course forum, with one staff member saying that the lack of announcements was down to the difficulty of posting such information on the edX platform! Another answer suggested that, despite the rapid increase in the number of people working on edX as a whole, only one person was working on publishing the problem sets for 6.00x and getting the automated graders to work.
This state of affairs is a massive and negative contrast to my experience of 6.002x run earlier on this year. Lecture materials and labs were consistently published around 2 weeks ahead of the schedule, allowing the type of learner that online courses are aimed at to plan ahead around family and work commitments. I can’t remember there being any significant problems or outages with the problem graders on 6.002x either.
Perhaps the reason for the current issues with 6.00x is that the concept of edX is simply failing to scale. By that, I don’t mean that the computing platform they’re using is unable to scale – quite the opposite, with around 7,700 students having tackled at least one question on the midterm 2 exam. Rather, this experience appears to suggest that the idea itself is not capable of scaling under the auspices of a single organisation trying to run multiple courses simultaneously, all of which were originally designed for traditional (rather than online) methods of delivery. It’s also been apparent that the presence of the originator of MITx and edX, Anant Agarwal, which was so obvious during the first run of 6.002x, has had no equivalent on 6.00x. From my perspective as a student it feels that the team behind 6.00x has struggled to deliver a smooth learning experience because the effort required in course conversion and leadership had been somewhat underestimated.
It’s all very frustrating as what has been a very interesting course has been soured by these issues. edX, despite all of the goodwill surrounding it has failed (so far) to deliver 6.00x to a standard that would persuade me to try another course from them in the near future – free or otherwise.
With Midterm 2 safely over, I can now reveal that I have just gained enough marks to pass the course – with a few more weeks of lectures and finger exercises, two more problem sets and the final exam still to go. That’s rather pleasing, so I hope no-one minds me sharing my progress chart below:
This time the midterm exam consisted of eight questions and was marked out of 96. I ended up with 92 – having reversed the answers for the EDrunk and PhotoDrunk random walks on question 6, losing 4 marks in the process. All in all though, it seemed easier than the first Midterm exam, which may just be a case of having had more practice with Python now I suppose, but I did manage to complete it in under 4 hours from start to finish. I would have been faster, but I took the test on Friday evening, so that necessitated a substantial break in the middle for dinner and wine …
As with the first Midterm, questions 1 and 2 were on definitions and this time I managed to think them all through *before* hitting the submit button. Question 3 was on statistical distributions and identifying which of a number of graphs belonged to particular datasets generated by a piece of code. Tricky, as the rules stated that you couldn’t simply run the code provided through the IDLE interpreter. I resisted that (huge) temptation and managed to identify them correctly.
Question 4 was on classes – with the final two parts of the question being the first two pieces of code we were asked to write: a small change to the __lt__ method of a class and the creation of a generator function. Question 5 was another pencil and paper exercise on a fragment of Python code, and again I managed to resist the temptation to take the easy way out and use IDLE.
Finally, questions 7 and 8 both required more code to be written. Question 7 was a Monte Carlo simulation followed by the plotting of a histogram using pylab. The second part of this caused me a small problem, as it wasn’t clear that only the plotQuizzes() function needed to be submitted to the grader and not the generateScores() function which also had to be written to support it! The error message I received on my first failed attempt hinted that this was the problem (something along the lines of not being allowed to use a random function call) and my second attempt went through. It’s clear from the 12-hour extension given for just this part of the exam that others had problems too (and also found other issues with the grader). Question 8 was on probability, with a fairly simple piece of code to write. Well, it was simple provided that you could solve the first part of the question, for which only a single attempt was allowed! Without knowing the correct answer (or being able to subsequently guess it), the coding would have been rather difficult.
Tonight I’ve finished the week 10 lectures – for which there currently appears to be no finger exercise posted. Past experience suggests that some may appear between now and Wednesday so I’ll keep looking in the meantime, just in case.
It’s been a while since I wrote about my progress on edX 6.00x. However, the course has been fairly uneventful until a couple of evenings ago when I completed problem set 8 (which was part of week 9, due to the US Thanksgiving holiday). It’s a little more stressful now, as I’m trying desperately to work out where I might manage to fit in a few hours before 0459 GMT on Monday morning to take the second Midterm exam.
Week 8 consisted of a single lecture sequence and a few finger exercises on sampling and probabilities, hashing algorithms and Monte Carlo methods. All pretty straightforward fare, especially with no problem set to solve. Week 9 started with a lecture sequence on statistical thinking covering variances and standard deviations, followed by a sequence on distributions. So far, all of the statistics covered on the course have been descriptive and despite the original syllabus suggesting that some inferential statistics would be covered, there’s been no sign of them … yet! All in all, week 9 has been a reminder of some of the statistical concepts covered during my OU psychology courses. The difference this time has been that I’ve had to write my own code to calculate things rather than relying on an application like SPSS.
I have a confession to make about problem set 8 – which involved using Python and Pylab to implement a stochastic simulation of patient and virus population dynamics. It took me 12 of the allowed 30 submissions before I managed to get the big emerald tick for my ResistantVirus class. A large number of those resubmissions were down to user error. These went something like:
“Hmm. I have test case 7 wrong. Let’s look at my code. Oh – I see the problem. Hack hack hack hack … ok, select the class to copy from the IDLE window, control C. Over to edX browser window. Ah. my old code is still in the grader. Control A, Control C … whoops, no Control X. Time to post my new code in. Control V. Grade. What?!??! Still wrong? Ok, let’s tweak the code again …”
Anyway, I eventually got there – but I was right a couple of hours earlier on than I’d realised. I will need to be far more careful on Midterm 2 …
After completing week 6, where the fundamentals of OOP were brain-dumped in half a dozen short videos, week 7 apparently changed tack. The first lecture sequence briefly introduced the NumPy and SciPy libraries and was followed by a lecture sequence on simulations and random walks.
I say that the course apparently changed tack as although problem set 7 was a simulation task, being successful relied heavily on applying the OOP concepts taught in week 6. I suspect that this will be true for the rest of the course. Although it may have seemed to some students (judging by the reaction on the forums) that spending more time on the introduction to OOP was needed, in reality I suspect that we’re going to get lots more practice as the course progresses.
I must be starting to feel more comfortable with the idea myself, as dealing with the OOP aspects of this week’s problem set didn’t seem to intrude on the main task, which was to create a couple of different classes of robots to clean tiles in a rectangular room using slightly different strategies. The ‘StandardRobot’ used a strategy of randomly changing direction when it was about to hit a wall, with the ‘RandomWalkRobot’ using a strategy of randomly changing direction before every move.
Probably the most interesting part of the task was deciding how best to represent the room and the tiles (cleaned or not). The discussion forums have seen students use two-dimensional lists of clean and dirty tiles, dictionaries, sets, multiple lists … all kinds of inventive solutions. Initially, I chose to record the position of each tile as it was cleaned in an unordered list, iterating through it each time a robot landed on a tile and appending that tile if it hadn’t already been cleaned. It worked (and got me through the grader) – but it was only when I started playing with the solution on my Raspberry Pi that I realised how awful my chosen representation was: checking the list takes longer as more tiles are cleaned, at exactly the same time as it becomes less likely that the robot will land on a tile that hasn’t already been cleaned.
An obvious alternative that I’m sure is better would be to reverse the logic of the data structure, so that the positions of all the (initially dirty) tiles are stored and then removed as they are cleaned. However, in the end I settled for a list of length room height × room width, containing True or False values representing clean and dirty tiles respectively. As each tile is cleaned, the element at position (y × room width) + x is set to True (and if it wasn’t already True, the count of cleaned tiles in the room is incremented).
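A minimal sketch of that revised representation looks something like this (the class and method names here are mine – the actual problem set prescribed its own class interfaces):

```python
class Room(object):
    """Tracks cleaned tiles in a width x height room using a flat boolean list."""

    def __init__(self, width, height):
        self.width = width
        self.height = height
        # One entry per tile; False = dirty, True = clean.
        self.cleaned = [False] * (width * height)
        self.num_cleaned = 0

    def clean_tile(self, x, y):
        """Mark the tile at integer position (x, y) as cleaned."""
        pos = y * self.width + x
        if not self.cleaned[pos]:
            self.cleaned[pos] = True
            self.num_cleaned += 1

    def all_cleaned(self):
        """Return True once every tile in the room has been cleaned."""
        return self.num_cleaned == self.width * self.height
```

Both the membership check and the update are now O(1) list-index operations, rather than a scan of an ever-growing list.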
The results of the change were impressive:
[Chart: execution times for 1,000 trials, all tiles cleaned, speed 1, single StandardRobot, on an Intel 2.5GHz x86 Family 6 Model 23 machine running Windows XP – original vs revised data structure]

[Chart: the same test on a 256Mb Raspberry Pi Model B running Raspbian – original vs revised data structure]
Unfortunately, the automated graders used by 6.00x don’t give marks for style or efficiency(*) so I haven’t bothered to resubmit my answer. I suspect that on the actual 6.00 course run at MIT, marks are awarded for such considerations. They certainly were on my computing degree in the early 1980s. This therefore illustrates one of the limitations inherent in this type of online course. And sometimes even the efficiency with which the result is obtained isn’t a good predictor of elegance. I suspect that the most efficient answer to the 6,9,20 problem from Midterm 1 was the one which simply returned a True or False value for the function depending on whether the value was in a hard-coded list. Clever perhaps, but not a general solution to the three box problem!
(*) Well, provided that the answer you give produces the correct result against the 6.00x test suites in under 30 seconds of execution time I believe.
Whew. Suddenly, after five relatively straightforward weeks, 6.00x has kicked up into a higher gear. I’ve just got to the end of this week’s lectures, finger exercises and problem set and it’s been far more taxing than anything, including the midterm exam, that preceded it. The main theme of the week has been an introduction to object-oriented programming, with various concepts (exceptions, classes, instances, inheritance and so on) being used for the first time.
This part of the course, while not totally new to me, is the part that I’m least familiar with. All of the ‘production’ code I ever cut in my career was definitely not object-oriented – and what little OO code I have created was for the purposes of demonstrating other software packages, rather than forming an integral part of such a package. There’s a difference – if your code doesn’t need to go into production you start to get a little sloppy about things – and the edX grader definitely won’t let you get away with that!
This week’s problem set has involved writing a number of classes to complete a program which selects and displays content from RSS feeds if particular trigger words or combinations of them appear in its configuration file. There was definitely some subtlety required to complete the task successfully and the very final part of it took me ages because I’d made a silly error. However, I appeared to be in good company, as at least two other people on the edX forum had made exactly the same error. (Hint: if you end up with the error message: ‘str’ object has no attribute ‘evaluate’, for the final part of problem set 6, have a look at what you’re passing to the boolean triggers. It should be the actual object from the triggerMap dictionary, not its constructor).
At the time of writing (Sunday afternoon), neither of the graders for the penultimate and final parts of problem set 6 is up and running, which is a little frustrating. Adding to the frustration this week has been the bug in the problem set that becomes apparent when doing filtering (the code expects methods like get_guid() instead of the getGuid() required by the grader earlier on) – but this is simple to fix, of course.
There was also a documentation issue in one of the earlier finger exercises. In it, the grader tests whether your isPrime() function can handle an input value of 0, whereas the problem definition says that the function only needs to consider numbers greater than or equal to 1. Fortunately, the diagnostic output from the grader is very useful in ironing out such wrinkles. Being able to debug other people’s code and documentation is an important skill for any programmer to pick up!
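For reference, a defensive trial-division isPrime() that copes with inputs of 0 and 1 might look like this (my own sketch, not the course’s model answer):

```python
def isPrime(n):
    """Return True if n is prime. Values below 2 (including 0 and 1)
    are not prime, so handle them explicitly before trial division."""
    if n < 2:
        return False
    i = 2
    # Only test divisors up to the square root of n.
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
```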
However, these minor problems, the grader outages, the availability of only the current week’s material and the pushing back of each week’s release date from Mondays to Wednesdays all give the impression that the 6.00x course team are a little overstretched – much more so than was apparent on 6.002x earlier this year. I don’t think anyone on the course this time around minds being a guinea pig (after all, the content is excellent and it’s free to participate), but it’s clearly something that will need addressing in future if the plan to charge for completion certificates is to come to fruition.
After all, one of the benefits of online or distance education should be that the material is available for you to work on when you have the time to study it. Getting ahead of the timetable was something I always tried to do on my OU courses, as you never know when real life is going to get in the way. And for me, I think real life is just about to do exactly that. But for the moment, I’m just waiting impatiently for week 7 to start.
Update 12th November
The graders for the final two parts of problem set 6 are now up and running. But guess what. Despite the docstring for makeTrigger() being explicit that it returns a value of None, it won’t pass the grader unless you return triggerMap[name]. Sighs again.
With the deadline for the submission of the first midterm exam safely past, I’m now able to reflect on my performance here. 95/100 is respectable enough – although having dropped 4 out of 8 marks on the very first question, it wasn’t looking too promising early on Saturday morning. Note to self: read the question, read the question again and read the question again before submitting the answer for checking!
Fortunately I only lost one further mark (out of the 20 available for the sorting and complexity question) and managed all of the questions which required Python programs to be submitted (5 of the 8 questions on the paper) without any real difficulty at all – even though the final part of question 8 was a bit of a teaser. In real life, I think the way to have fixed the bug would have been to have thrown the code away and started again – with a solution that looked more like the second part of the question – rather than tweaking the rather strange code that was provided. But I guess the point of the exercise was to demonstrate an understanding of variable scope and how to pass functions as arguments to functions, rather than good coding style.
There was one question for which I’m particularly pleased with my answer, partly because even though I’d not come across the puzzle before I managed to figure out an elegant and recursive answer in around 3 minutes!
The problem asked for a boolean function to determine whether an arbitrary number of food items (I refuse to advertise the brand) could be packed exactly into boxes holding 6, 9 and 20 items. For example, 21 should return True (9+6+6), whereas 7 should return False.
Recognizing that the problem is easy to solve recursively (by repeatedly stripping off quantities of 20) led me to this solution outline very quickly:
If the quantity requested is less than 6, return False
else if the quantity requested is divisible by 3 with no remainder or the quantity requested is divisible by 20 with no remainder, return True
else return the value of this function for the quantity requested – 20
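That outline translates almost directly into Python (the function name here is mine, not the exam’s):

```python
def can_pack(quantity):
    """Return True if quantity can be made exactly from boxes of 6, 9 and 20.

    Boxes of 6 and 9 together make every multiple of 3 from 6 upwards,
    so after stripping off boxes of 20 only a divisibility check is needed.
    """
    if quantity < 6:
        return False
    if quantity % 3 == 0 or quantity % 20 == 0:
        return True
    # Strip off one box of 20 and try again.
    return can_pack(quantity - 20)
```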
After I’d got to this answer I did try to think about how I might create an iterative version, but nothing I tried seemed to be particularly elegant. Searching the web after the exam had closed revealed a number of different iterative solutions to this and similar problems, but in this case, a recursive answer definitely seems to be both easier to understand and to program.
The week 6 material should be out later on today and we’re just about to get into the part of the course that wouldn’t have been taught when I was taking High Level Programming ‘A’ using Pascal at Warwick University in 1982 – object orientation.
Looking at searching and sorting algorithms is a logical way to follow on from an initial discussion of algorithmic complexity, so I wasn’t completely surprised when the material for this week addressed precisely those two topics. Linear searching, selection sort and merge sort algorithms were covered along with a discussion of how to calculate their complexity. There was also some light relief as the usual “talking head” videos were supplemented by a rather quirky video on how to amortize the cost of algorithms.
If I ever need to figure out the costs of storing and moving puppies I now know where to go … and no, that’s not me in the video, even if he is called Tim. One of us is definitely far better looking than the other …
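For anyone who wants to see the central idea of the week in code, here’s a minimal merge sort sketch along the lines the lectures covered (my own version, not the course’s implementation):

```python
def merge_sort(items):
    """Sort a list in O(n log n) time by recursively splitting and merging."""
    if len(items) <= 1:
        return items[:]
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in a single O(n) pass.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The log n levels of splitting, each costing O(n) to merge, are where the O(n log n) complexity discussed in the lectures comes from – a clear improvement on selection sort’s O(n²).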
I completed week 5 earlier on this week, so I’ve been working on the first midterm exam today. The honour code forbids students from talking about this until it has closed for everyone (just before 0500 GMT on Monday) so I’ll leave my reflections on it until a later post.
John Spurling, in his introduction to J.G. Farrell’s unfinished novel The Hill Station, reflects that the feelings of sadness and disappointment he felt when he first read the incomplete manuscript were inevitable, and that:
The only antidote is to be warned in advance and to enjoy what there is of the journey without expecting any proper destination
This week on edX 6.00x has been a little like that I suppose. The problems with PSET4 (the edX word game – similar to Scrabble or the popular Channel 4 TV show “Countdown”) I mentioned in my earlier post were eventually partially resolved, but it meant that the final two parts were left ungraded as (quite rightly) the course team decided that too much time had elapsed this week to give everyone a fair chance at solving them in a way acceptable to the automated grader. However, the final two parts of the problem set were posted and I did complete them.
While I’ve been developing my solutions using my laptop (as that’s what I have with me during the week), I’ve also been running them on my Raspberry Pi at the weekends – as I’m determined to get my £30’s worth out of the device! While it’s not exactly the same distribution of Python (I think), the version number is the same (2.7.3) and everything that I’ve developed on my Windows laptop has worked properly so far – as you’d expect.
If I have the chance tonight, the one niggling loose end left undone is the algorithm I used for compChooseWord. It’s very definitely a ‘brute force’ version, with complexity O(len(wordList)), which I know I can improve dramatically!
Which leads neatly into the topics of this week’s lectures – debugging and computational complexity. There was nothing in the content that was unfamiliar to me, but both topics have provoked some interesting discussion in the forums. The definitions of algorithmic complexity have produced some good observations, both on the facilities within Python for looking at the cost of running particular sections of code (“import time” seems like a good starting point to me) and on whether specific operations (such as addition, multiplication, assignment and comparisons) really do all cost the same. (They don’t, but for the purposes of analysing the general complexity characteristics of algorithms it is of course possible to ignore such minutiae.)
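For quick-and-dirty measurements, the time module is enough (a rough sketch of the idea – the standard library’s timeit module is the more rigorous tool for serious benchmarking):

```python
import time

def time_call(func, *args):
    """Return the wall-clock time taken by a single call to func(*args)."""
    start = time.time()
    func(*args)
    return time.time() - start

# Compare a linear scan against a constant-time hash lookup.
data_list = list(range(100000))
data_set = set(data_list)

print(time_call(lambda: 99999 in data_list))  # O(n) scan of the list
print(time_call(lambda: 99999 in data_set))   # O(1) set membership test
```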
Week 5 is due to cover memory storage, sorting algorithms and hashing. With midterm 1 also due to be completed during this week, some careful time management will be needed.