EFF sues Proctorio on behalf of student falsely DMCA'd
All: this thread discusses two distinct lawsuits. Originally there were two threads, but I merged them (see https://news.ycombinator.com/item?id=26900217). Sorry! Now they're blended and there's not enough energy to reverse the entropy, so you'll need to track which is which as you read the thread.
One is discussed in the OP. The other URL was https://twitter.com/Linkletter/status/1385004344903290883, but that doesn't give any background. There's more here:
https://www.theverge.com/2020/10/22/21526792/proctorio-onlin...
https://www.eff.org/deeplinks/2021/02/student-surveillance-v...
Edit: It turns out there have been quite a few previous threads too. Pointers to those at https://news.ycombinator.com/item?id=26907558.
In case you don't know who Proctorio is:
(like me)
Apparently it's supervision software that students are forced to install on their private computers, and (as expected) it does its worst to invade your privacy and flag "suspicious" things, based on which the university might punish you.
"Suspicious" here means wearing glasses [3], or looking around the room, or blinking too much [4], or having eye and/or skin colors [1] that are difficult for the AI to track, or reading questions out loud [2]. Because everyone knows that a good student is white, sits in a bright room, and will continuously stare at his/her PC screen while thinking about a difficult math problem, I guess. WTF?
I am so glad that this kind of abuse was not yet common when I was in university. I love sitting in the (dark) basement, it helps me concentrate. And I tend to close my eyes a lot because it helps me visualize the problem. I'm sure this kind of misguided software would have failed me.
And the worst part is: Bugs in this software will fail students in the real world. [4]
So it is crucially important that this type of software receives a lot of scrutiny to make sure it works as planned. But it seems that Proctorio is suing this guy for doing exactly that: Documenting how the software is supposed to work by linking to Proctorio's YouTube videos.
[1] https://twitter.com/uhreeb/status/1303139738065481728
[2] https://www.insider.com/viral-tiktok-student-fails-exam-afte...
[3] https://proctorio.com/frequently-asked-questions
[4] https://www.reddit.com/r/UBC/comments/g2ub05/god_kicked_out_...
For what it's worth, at the school where this happened, UBC, Proctorio is now effectively banned, along with other similar "algorithmic" proctoring tools, and exams are no longer allowed to use it, with some exceptions.
They've moved to Lockdown Browser without the recording, and to Zoom proctoring. In my opinion, neither are particularly effective measures against cheating, and I'm sure they are trivially bypassable.
The effect of these tools being phased out is that exams must now be harder or less student friendly. Typical practice in some of my courses has been to not allow students to go back to answered questions, while giving a large number of questions with insufficient time. The exams are scaled, but I can imagine people doing worse under this kind of stress.
This is silly. Make the exams open book and then the only thing you really need to worry about is the correct student taking the exam. If you're testing based on memorization of things in this day and age, then the course is useless.
I teach at the university level and this is what I have done with my exams. Everything is open book and in my experience there is no difference in the average exam score between open book online and closed (or open) book in person. It is also way easier for me to not have to worry about who is cheating and where everyone is looking (I also don't like the idea of forcing students to turn on their webcams).
I finished up the coursework for my masters degree at the start of the pandemic. My university was quite flexible for how instructors would examine us, given how sudden everything had to change.
One of my courses, which only had about 8 students and two instructors, decided to do an oral examination, which ended up being basically a very in-depth, one on one conversation about the course material and based on the expectations set in the syllabus (so, no surprises).
While obviously not practical for large rosters, this was by far the best exam format that I have ever done in my many, many years of schooling. I'm sure not everyone would prefer it, but the students unanimously agreed to try it (wouldn't have done it that way otherwise), and it was just so great. It was not at all like an oral thesis defense, which was what I was a little worried about.
When I was an engineering student at the University of Naples, all my courses were examined with both a written exam and an oral one. No exceptions, no matter how many students. It was hard for us and the teachers but, boy, you had to really study that stuff! Since then I've become an academic myself and have been teaching in several countries. I have never found the same level of rigor in any place I've been.
I remember friends in the '80s studying law in Turin always having orals - as you say "they had to know their stuff"!
Italian universities get many things wrong, but in terms of quality of graduates I reckon they used to be really good. The downside was having a high number of drop-outs (of which I am one). They've gone through umpteen "reforms" in the last few decades so I don't know how they do these days.
This is how exams are in Russian universities. You walk in. The table has a number of small paper cards on it face down with topics the course covered. You pick one at random, flip it over, and that is the question you need to answer. Since you do not know up front which you'll pick, you need to know all the material. Since you only need to answer one question, professor time is saved and exam throughput can be quite high.
Professors are also given quite a lot of flexibility in their grading. My mother had a fun story about a professor she had in college - a professor of a really hard math class who wanted to save on exam time. He announced "exam will be hard. Anyone willing to settle for a D, bring your report cards forward, I will mark them D and you can leave. No exam.". Some people came forward, got their Ds marked, and left. Once the door closed, he said "Anyone willing to accept a C, please come forward". Some did. After the door closed there, he announced to the remaining smiling students expecting easy As/Bs: "I'll see you all for the exam tomorrow 8 am".
No way this could happen in USA.
> My mother had a fun story about a professor
I heard that story many decades ago in the form of a joke. It may have started from a professor who genuinely didn't care about failing students but did care about identifying the best.
> decided to do an oral examination,
This really is the best case, but as you note it was 8 students so quite manageable.
It requires a little skill on the part of the examiner, but you can quickly find out how much material the student knows with much higher accuracy than other exam formats, in my opinion.
One of the skills needed is to be able to make it conversational-feeling and reduce the anxiety of students. You can often tell when a student mostly knows what is going on but has misstated or misremembered something, and guide them around the place they got stuck.
Orals have a lot of advantages, but they also make it very easy for unconscious bias to come into play, in that all the criteria for grading are soft.
Good point, this is also one of the aspects of skill. There are techniques you can use effectively to mitigate this.
One unfortunate thing is that poorly done, orals can be very uneven.
There already is unconscious bias. You can see the student's name, their penmanship, their writing style, you likely know who they are, etc. An oral exam just changes things by shifting the bias to accent, inflection, enunciation, skin tone, dress, etc.
When I was in school, 95% of grading was blind and nothing outside of exams was handwritten.
And while it was possible to de-anonymise, the academics were all in support of blind grading so why would they?
The only exceptions were projects where everyone was assigned a different topic, and graded presentations.
Of course, this is simpler in STEM subjects - it's not like you can guess someone's race or gender from their switching power supply design. Subjects that prize in-class participation and lengthy essays would probably be harder to blind-grade effectively.
This also depends quite a bit on class size. If you are one of 9 profs and 36 TA's on a whole year of a 1st year intro course, you can get together and batch mark finals very effectively blind.
If you are teaching a 4th yr/masters mixed class of 11 by yourself, you pretty much get to know who is who whether you want to or not. I suppose avoiding handwriting can help if it's appropriate (e.g., it won't work in a math course), but I suspect you'll know everyone's style by then anyway.
The examiner doesn't even have to use the oral exam to give a grade. They could use that part simply to figure out whether the student passes or fails. It's very difficult to cheat in an oral exam. Combine the oral exam with the written exam and you could get an overview of what the student knows.
This privileges confident speakers. (The same way written tests privilege confident readers, and standardized written tests privilege those who have the time and resources to study the standard.)
I'm not sure about that.
It certainly could, but it just as easily might not. I imagine being confidently incorrect is likely to produce a worse result than being unconfidently(?) incorrect, for example.
Similarly, a less confident speaker may end up spending more effort justifying their answers, which could better expose their knowledge.
I think it would depend quite a bit on the examiner in this case. Some people may even be simply biased against particularly confident speakers, particularly considering the relative positions of the speaker and examiner.
It doesn't if the examiner understands the material. You can't bullshit someone who knows much more about a topic than you do - bullshitting with confidence will only make you sound like a fool.
If the examiner isn't much more knowledgeable about a topic than their students, then something else has gone wrong.
But you can appear to know less than you actually do through a lack of confidence.
Right, but that's why it takes a bit of skill on the part of the examiner - you need to be able to support people through their nerves and lack of confidence.
Not much if you do it well. Confident and wrong won’t get you far, either ...
The real world also privileges confident speakers.
Schools should be teaching kids to be confident speakers and readers from an early age.
If teaching worked 100% we wouldn't need exams.
> and it was just so great.
I'm delighted to hear that it went so well, and I am a believer in the idea. I have seen, from time to time, oral thesis defenses become rather tense and difficult, and think that things go better in proportion to the preparation of both student and examiner. Any general observations about what worked, for those contemplating giving exams in this fashion?
Probably the most important thing that made it a good process was that, although I was certainly under pressure to perform and show that I knew and could explain the material, the format was as a thoughtful, two-way, deeply-engaging conversation rather than a grilling, one-sided examination.
The examiner (one of my two instructors) had a list of questions/topics that we had to get through, but the specifics and flow were natural and spontaneous rather than artificial and forced.
What made the conversation good was that the examiner discussed points that I raised, raised points that I didn't, asked my opinion on e.g. real world implications of theories or conclusions that could be drawn if this or that were true, et cetera. This made it two-way and engaging. While he did not give me any answers, of course, when it was clear that I could bring up and sufficiently discuss a topic, then he would go into details, which would trigger even more detailed responses from me, and so forth. In this way I think he was able to probe the depths of my understanding while not needing to employ a one-sided question/answer format.
I think that it is difficult to bullshit a topic in depth with someone else who knows what they are talking about, so an oral exam probably does not need to be a hardcore opposition like a thesis defense might be.
The allotted time was maybe 30 or 45 minutes (I don't quite recall), after which the examiner would tell you what grade they would recommend. If it was a grade with distinction, then of course you would say thanks and be done. If it was less than distinction then you could request another 10 or 15 minutes of further examination to try to show your mastery. (I passed with distinction, so I didn't go through that part, but I assume it would have been a continuation? But maybe it would get a little more intense if you were trying to improve your performance at the last minute? I don't know.)
My experience with oral exams has been the opposite. The profs ask useless questions about details that do not matter at all and would never be covered in a written exam. They are also very biased in how they assess you, based on their preconceptions. And it doesn't give you the time and privacy needed to solve real technical questions. (Most of my written exams were only 4-5 questions in 3 hours, which is obviously not happening in an oral exam.)
At my alma mater, for most courses we had to actually discuss what we wrote (and maybe some other topics) with the lecturer as part of the exam. Some were open book, some were closed. Regardless of that, however, when you get to talk to the professor it's very hard to fake knowledge and especially logical thinking. It sure takes more time (we usually had one-on-ones for about 20-30 minutes per student), but you can size up a person far better than by just reviewing what they wrote/selected. And there is no obstacle (other than time constraints, which can be worked around) to continuing this even during the pandemic. I think we are overestimating technology such as AI and its applications. Some things are just plain better when done by a person. I had a very positive experience with one professor who chose to round up my grade instead of down, because my errors were technical (carelessness mostly), not logical, so he then asked me questions from subsequent chapters to see my reasoning. I really didn't care that much about the grade, but that discussion and the overall experience of failures that could lead to positive change through introspective analysis is something that you can't get from an AI or any pipeline processing.
Thank you! I'm sure it takes more work to make an open book exam, but it's definitely to the benefit of your students.
If you can't see them, doesn't that open up the possibility of students taking the exam together?
I give regular online exams without any proctoring software, pure honor system. I have three students who live together who consistently turn in identical work. What I don't understand is why they keep doing it, when I keep giving them 0s for copying.
How did you find out they live together?
Not every university / university class is big. Most likely explanation is that they simply said they live together.
Ah, that'd make sense. I would just think that they wouldn't mention it if they planned on cheating lol.
Back when I worked first line at a University, I had access to the student records system (so that I could verify students if they required a password reset).
I was able to see current and past residence data, which I suppose could have helped me verify the student's identity ("what post code[0] do you currently live at?"), but I tried not to look at that information when possible.
Lecturers also had access to this system (it dealt with some submission data), so I guess they may have had access to view residency data?
This is just an educated guess based on my experience, though.
[0]: For American readers, think ZIP code
Uncontrollable stupidity, apparently. Where I went to school you would have been called to the Dean on the first event and perhaps put on academic probation if they didn't like your answers. A second time would have been academic suspension.
Crazy stuff.
Presumably they have some kind of recourse as to their grades.
Our university physics teacher allowed us to bring a one-page self-made cheat sheet. By the time you compressed the essence of a whole course into a one-pager, you were pretty much done learning. :))
When I was involved in the teaching of a course that allowed this, one student basically reduced all the course notes onto a single two-sided page, except they also had flaps on it to fit more in. The exam invigilator took it from them deeming it (correctly imo) to not be in the spirit of the allowance. We decided we'd mark their exam, and then determine what we should do about this cheating-cheat-sheet. Given they got something like 5% in the exam anyway, we decided to let it slide.
My own experience of making up those cheat sheets is the same as yours, the effort put in to create them is an excellent teaching tool in itself, as you're studying and distilling all the stuff you don't know well, and by the time you're done, you hardly have to refer to it in the exam anyway.
For physics, they'd just give us a page with all the formulas but with no context aside from the letters in them, so if you knew roughly what it should look like, you could find it and you're not at so much risk of making a basic error under pressure.
I still have a couple of the cheat sheets my study group compiled -- they're so fun to look back on. One for Computer Organization is almost unintelligible to me now; we were so proud of the density of that sucker.
did your instructor also allow a desktop microscope?
One of the more creative tricks I remember was writing a cheatsheet in red and blue text overlaid on each other and using 3D glasses to read it, which allowed you to squeeze more on. Never actually did it, so I don't know if it was useful...
This was common in my engineering school too. I enjoyed creating the cheat-sheets more than any other form of studying, because what better way of learning three chapters of thermodynamics than having to distill them into 10 lines of text on a notecard.
I always permitted/encouraged this when I taught (I'm retired now). Every once in a while, we'd find some enterprising student selling pre-made cheat sheets, which were of course of little value to the purchaser.
I can imagine that! At my university, we insisted on having the sheets be handwritten, which (at least) acted as a damper.
At one point in college, I printed "Don't Panic" in large letters on one of these one page cheat sheets.
This raised the eyebrows of the TAs proctoring the exam, but like, if you don't know the stuff, a page isn't going to help you at that point.
I recall at some point that I started writing that type of cheat sheet, even if I didn't get to refer to it in the exam, for that reason.
My mom is a college professor, among other things, and her exams are always open book. She has every expectation of the students to have the book in front of them when taking the test.
She's a fully online professor, though, so maybe that's the difference? Different dynamic from the traditional, in-person classes professors?
I also don't think anything but her quizzes are multiple choice? Everything else is essay format, participation (discussion posts in class forum), and a HUGE paper, so there's really nothing to cheat on, aside from plagiarism, and ... there are ways of catching that.
This. Our engineering exams were all open book and tested understanding rather than rote memorization. If you didn't understand the material well there was no amount of googling that could help you.
I would add some nuance to this view. For nearly every undergraduate course I agree - the ability to know where to look for resources in a particular subject matter, synthesize the available information to solve a reasonably novel problem, and present the information in a coherent way is a skill/art that should be one of the primary goals of the education.
For advanced degrees, though, I'm not so sure. I expect someone with a master's or Ph.D. to actually be an expert in the subject matter, not just someone who is really good at figuring out how to solve problems. A big part of that is being able to internalize the information so well that you in effect become a resource that could be used by an undergraduate student. This internalization goes beyond rote memorization, but memorization is a big part of it too.
Just a disclaimer, though - I do not have an advanced degree, so maybe I expect too much from those who do? A big reason why I have no interest in pursuing one is that I find many people with a master's degree have little expertise to show for it; they could have just as easily learned the same information by self-study or by being fortunate enough to find interesting work. (The hiring landscape is a separate topic.)
Just as a counter-example: I have 2 "advanced" degrees - in engineering and in management (btw I did NOT do them because I was smart, but rather because I was curious and had a lot of time on my hands; I definitely struggled in both of them, though I really, really enjoyed them both). I am not what you would call a model test-taker. I got barely passing marks in all my exams, and yet I was consistently a high scorer on my "assignments" (sadly, assignments at the time only accounted for at most 20% of your grade, with 80% being what you could cram-and-dump in an exam).
I still cannot "remember" how to perform a discrete wavelet transform from memory (my honors thesis), but I found myself digging into it a couple of months ago (just fiddling on pet projects). An hour of googling got me on track. The point is there is so much that even those with advanced degrees have to know, and having to retain it all in memory all the time is both infeasible and wasteful. Yet expecting kids/young adults to do this is truly hypocritical and disingenuous.
I'm a physics grad student, and this is completely wrong. The purpose of graduate classes (at least in physics) is to teach you how to navigate the standard reference texts. For example, the main thing I learned in my grad E&M class was where to find information in Jackson (aka "Classical Electrodynamics"). Classical mechanics was a mix of Goldstein and Landau and Lifshitz. Quantum mechanics used Sakurai. And statistical mechanics used Pathria. Also, referring to these classic texts by author is very common, much more so than by title.
Valuable comment right here for the autodidact, thank you!
My experience with people who have PhDs is that they know the things you expect them to have memorized just because they use them a lot. They learn formulas or whatever not because they spend 5 hours memorizing them, but because they needed them once, so they looked them up. The second time they needed it, they weren't 100% sure, so they looked it up again just in case. Every time they search for it, they need it less and less, and in the end they know it by memory. That's how you learn most of the stuff these days, and that should be true for everything. Natural learning. Why memorize things you don't need to know? If you use it often enough, you'll end up memorizing it. Same for concepts or any type of knowledge.
I can tell you the peak excitation wavelengths of a dozen fluorophores. Not because I wrote them on flashcards and memorised them, but because I spent four years in a darkened room sliding filter cubes around to take pictures of slides stained with them!
There's a term for this and several software packages to help exploit the effect.
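(Presumably the term is "spaced repetition", as used by flashcard tools like Anki. As a purely illustrative aside, a simplified SM-2-style scheduling update might look like the sketch below; the function and variable names are made up for illustration, not any particular package's API.)

    # Simplified SM-2-style spaced-repetition update (illustrative only).
    def next_review(interval_days: float, ease: float, quality: int):
        """quality: 0-5 self-rating of how well the item was recalled."""
        if quality < 3:
            return 1.0, max(1.3, ease)  # failed recall: start the item over
        # Successful recall: nudge the ease factor, then grow the interval.
        ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        interval_days = 6.0 if interval_days < 1.5 else interval_days * ease
        return interval_days, ease

    # An item recalled well a few times in a row drifts out to weeks between reviews.
    interval, ease = 1.0, 2.5
    for q in (5, 4, 5):
        interval, ease = next_review(interval, ease, q)
        print(round(interval, 1), round(ease, 2))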
1. PhDs don’t “do exams”. You can’t cheat your way to publishing original research (exceptions of course), having external expert researchers review it, and getting it accepted in a reputable journal. Given that that’s a requisite for obtaining your degree, there’s no point in a final exam. Most defenses nowadays are partly ceremonial (exceptions of course). Both the material and your ability to do research have been checked months before by your advisors and graduation committee.
2. Master’s degree courses follow all kinds of schemas for examination. From oral exams, to in person no extra material allowed, to open book, and take home. Being on both sides (taking the exams and creating them) I can say that it really doesn’t matter if you allow people to take the exam home and collaborate among each other. These types of exams are designed to really test a deep understanding and ability of the material. There have been exam questions where an entire class of >20 students are not able to solve it. People that are really good are able to have a shot at it and maybe make some progress, and that differentiates the good from the exceptional.
3. Nowadays undergraduate degrees are a commodity, so it feels like they need to make sure only the good students get one. But in the end it doesn’t really matter, since most employers (exceptions of course) will want to see how much value the candidate adds, which does not correlate with having a degree or good grades.
> You can’t cheat your way to publishing original research
Of course you can, making up results is a time-honored tradition!
It does take a certain amount of skill to do it believably, though.... ;-)
I felt as if exams in my masters (in Mathematics) were different: they mostly check you understand the definitions, maybe remember some basic tactics for approaching problems. You can't really check the students can solve "real" problems, as those are too complex for the limited scope of an exam.
When I have discussed this, most people I know argued the opposite (for Mathematics). The undergrad courses are supposed to lay the foundation. Stuff you need so often, and stuff you need to recognize when useful, you should know it by heart.
At graduate level, there is too much stuff. You know the outline, you know where to find stuff. But you don't need to know everything exactly. If you forgot one passage from a definition, that should not cause a failure. Because in 'real Mathematics' you get to look at references.
A nice trick for open book exams is to make it time-infeasible to just look up everything, but allow students access to the materials in case they have a brain fart.
It depends on the degree. If you're becoming a medical doctor you'll have to do a ton of memorization, but in other fields doctorates are the least memorization-intensive part of your education, because the focus is on generation and synthesis, on the assumption that you've already learned the rote knowledge. You don't need to memorize anything to write a dissertation but that doesn't make it easy (and you'll end up memorizing everything anyway).
> medical doctor you'll have to do a ton of memorisation
Which is a load of dingo's kidneys.
Gatekeeping.
All the good doctors I've ever been to have a wall of reference material and use it, even if only to show the patient, but it's there and accessible.
And besides, it's not like practising medical doctors don't make heaps of mistakes.
The number one cause of complications in a medical setting is medical intervention, so it could be argued doctors should be using more reference material and not relying on their overworked brains.
Anecdotes in support of this comment:
I lived with an orthopedic surgeon doing a specialization in pediatric orthopedics.
He was very, very, very good, based on my understanding of the places he had worked, the offers he had to undertake his specialization, the change in operation waiting times and his model of how he worked within the hospital, and the fact that immediately after returning to his home country he was head of the newly created pediatric orthopedics ward.
His description of how he would approach a surgery was straight-forward:
- he would review the surgical referral, his notes from his patient intake
- he would review appropriate notes and documentation from the hospital on the surgeries allowed, including reference texts on that particular surgery
- he would create a full plan for entering, performing, and exiting the operation
- (surgery)
- he would review the plan, add notes on how it actually went
So it's not like he woke up, walked into a surgical room, and ad-libbed it.
As an anecdote, he said he liked B-students for his surgical teams because they had to work hard to get there.
I also was good friends with a very nerdy emergency room doctor. He offered the anecdote that he was "called out" for using Google. His justification was that "once you throw in the appropriate medical terms, all you're getting is the latest research on treatments--and I was attempting to offer the best and latest in proven care". He also made several changes to the shifts at the ER, including using time-sheet and patient data to show that patient outcomes suffered during longer doctor shift rotations. His anecdote on his schooling: "I was a bad student, and I'm a poor doctor, but my job is to get you stable until we can intervene properly". He funded a startup and wrote (in Java) a medical EMR system.
But the parent post is arguing that to become a doctor you need to memorize an ungodly amount of things, which is absolutely true. And about that wall of reference material... yeah, it's usually there to impress patients; most of those books were never touched :) (I'm from a family with too many MDs.)
It might be true, but I'm arguing it isn't necessarily the best.
We need more doctors; if that means changing the culture within the profession so they're more inclined to use that reference material, that would be a good thing.
This is why PhDs typically have comprehensive exams. It varies (a lot!) by university and even department, but panel-led oral exams are pretty common. At some places they are even open to the public (no pressure). These are distinct from a thesis defence, which happens at the end of your degree; they are likely to happen about a year in and are meant to ensure you have a solid background.
Overall I think an oral exam run by a skilled examiner is the best of all worlds, but it isn't practical for a section of 500 undergraduate students. It's quite doable for a seminar, and manageable up to say 10-15 students; it becomes difficult after that.
As you say, different departments and institutions have different approaches, but I'd definitely emphasise that outside of Europe (where I've seen more ceremonial "exams"), there are still robust and vigorous "final" oral viva exams used.
In some European universities, the outcome is such a foregone conclusion that the candidate's family is laying out the buffet, peeling the cling film off the plates, and unboxing the champagne bottles as the "defense" begins.
In the UK however, I've never seen this. It's generally a small room with candidate, an internal examiner, an external examiner from another university who is an expert in the field, and a convenor to record the minutes of the examination. The outcome is by far from a foregone conclusion.
A good student who is an expert in their field, is well-read and up to date on their work and the surrounding literature will perform well and have little to fear. Someone who hasn't written their own thesis, or didn't really have an understanding of the area, and thus isn't really an expert, will have a very unpleasant time, and will likely be failed, or be sent away with major corrections to be completed to the satisfaction of the examiners, possibly including a full oral re-examination.
> This is why PhD's typically have comprehensive exams.
America is leaking again.
> America is leaking again.
Ok, from a terminology point of view that's fair - it isn't the same everywhere by any means.
However, most if not all of the graduate programs I know of internationally have something roughly equivalent, whether they are comps or prelims or qualifying exams or whatever.
The basic idea is that a department (and university, generally) has an interest in maintaining the quality of their programs, and one way to do that is to make sure that your students never leave with glaring holes in their background. The best way to do this is some sort of comprehensive evaluation, and the time to do it is at or near the beginning of a program - otherwise there is no time to address deficiencies.
No UK PhD programme has exams like this, that I have heard of.
Some UK programmes differ in that there isn't the same formal concept of "candidacy" like you see in other places. The end-of-first-year review is often a written report, sometimes with a "mini-viva".
The common factor among all UK PhDs I know of is that there is a rigorous viva at the end, where the outcome for the student is not a foregone conclusion. Despite the shorter overall duration of the PhD (~3 to 4 years typically), the oral examination can (rightly) cover material far beyond the scope of your thesis - if you are an expert in your field, you will be able to have a knowledgeable and informed discussion as a peer with your external examiner, who will be a recognised expert in the field. I'm definitely a big believer in the importance of being able to have a well-informed discussion around the area of your work, and actually found the whole viva process very enjoyable and cordial - a nice chat about the wider field, my and the examiners' own previous work, some debate of the merits of different approaches, and then onto a run-through of the thesis, chapter-by-chapter, skipping any pages where there were no points for discussion or contention.
Unlike European vivas though, there's no family or friends, no champagne corks being popped mid-defense, and no foregone conclusion of the outcome. I've been at European vivas with the family of the candidate preparing the celebratory buffet at the back of the auditorium while the questioning continues!
> Unlike European vivas though, there's no family or friends, no champagne corks being popped mid-defense, and no foregone conclusion of the outcome. I've been at European vivas with the family of the candidate preparing the celebratory buffet at the back of the auditorium while the questioning continues!
While the UK model is probably preferable in a lot of ways, this sounds much more enjoyable :)
Oh, good point, the UK programs I know of are notably lighter in this regard (and the PhD shorter), though they do generally require a 1st class honors (honours, I guess!) degree in the subject, which includes the "tripos" and is roughly equivalent. So in some ways a higher bar for undergrad matched with a lower bar for grad.
At least that's the theory - I don't know if in practice it holds up; most of the grad students and later I knew from that system came from Oxbridge, which has a number of quirks.
The difference I tend to see is that the outcome of the UK viva (oral) examination is far less of a foregone conclusion, and can still be quite traditional in that students are expected to be able to have a broad and well-informed discussion about their wider field and the context of their work. The thinking is that they will (if deserving of a PhD) have a certain level of expertise, and thus be able to have a discussion with their external examiner (a distinguished and recognised expert) as a peer.
My experience of it was that if you are genuinely knowledgeable and approaching being an expert in your field, it is an enjoyable experience, and just like having a (longer than usual, but not uncomfortably long) conversation with someone about a topic that you both share a deep interest in. There is nothing to worry about, as you can have a nice discussion about an interesting topic, and share interesting ideas etc.
I'm not sure if it's a lower bar for grad as such - I think it's got fewer "formal" requirements, and far more informal requirements. The most common way to "fail" is to simply not submit the thesis.
I don't know anyone who failed their viva entirely, but it's standard to be asked to make corrections to the thesis after it. Getting passed without corrections is a significant achievement. Those corrections can relate to weaknesses in your general knowledge uncovered in the viva: if there is something the examiners think you should know about, and you don't, you can be asked to add it to the thesis.
My impression was that someone failing their viva was seen as a serious failure on the part of their supervisor, who should not have let them submit without being more prepared.
> UK viva (oral) examination is far less of a foregone conclusion,
That matches my limited experience with UK, although I've seen north american ones without a foregone conclusion - seems really dependent on institution.
> The most common way to "fail" is to simply not submit the thesis.
I think that's true everywhere; lots of people just don't complete. I've heard somewhere that the "ABD" (all but dissertation) rate in the US is nearly 50% at 10 years, if you go across all disciplines.
The US frequently merges the masters and PhD programs into a single 4-7 year program, rather than a 2-year masters followed by 2-5 year PhD program. The comprehensive exams for PhDs are typically at the end of the first two years, and cover material similar to that of a masters.
The two are pretty much equivalent, but with different names.
To quote my advisor, who was a faculty member on both sides of the Atlantic: while the path is different, by the end of the PhD you have mostly done the same amount of work, regardless of which system you participated in.
Often the tests at this level are difficult enough that the need for memorization is implicit.
Open book is also fine in this case because without a base of memorized understanding an open book isn't that helpful anyway.
I never understood the concept of cheating.
If you don't know the answer to a question and you look it up, that's not cheating, it's research, or at least referring to reference material, which is always permitted in real life. Except for some contrived or rare circumstances someone will now point out. Even the Apollo 11 crew had support.
Any professional in any industry is allowed to say: "I don't know" and "I'll get back to you" and "let me look that up".
> Except for some contrived or rare circumstances someone will now point out.
I don't know about you, but I tend to want my pilots, surgeons, and lawyers to tend towards self-competent, and I don't think those circumstances are contrived or rare. It's not to say that they can't ask for help before or during flight/operation/trial -- and in fact asking for help and crew-resource management is important for those fields -- but not all fields are like software engineering or aerospace where there may be dozens or hundreds of people that can help you in a timely fashion if you have completely no idea what you're doing. I'd prefer pilots, surgeons, and lawyers that didn't cheat their way into their jobs.
A similar example is when proficiency in a language is necessary for study or work in another field. A university where all coursework is done in English, for example, needs for incoming students to be able to understand lectures and texts in real time. Tests like TOEFL or IELTS are reasonably good at measuring those skills when they are conducted in a proctored testing center. If test takers can use dictionaries, voice recognition software, machine translation, etc. during the test, then those real-time skills can no longer be measured accurately. Since last year, many universities have had to adopt other methods of evaluating applicants’ language skills, such as one-on-one interviews, which are more expensive and time-consuming.
That only works if the answer can be looked up.
Real life isn't always kind enough to only give problems that someone has already solved and published somewhere you can find.
The class should effectively teach you how to make that distinction in a given field/topic. If you didn't pay attention at all, you won't know what to look for or how to figure something out quickly enough to pass. Cheating would be most damaging if the actual answers were shared or similar attacks on the integrity of the test itself.
Knowing how to find or determine the current correct answers yourself is often more useful in the long run than memorizing what amounts to trivia. This is especially true when it comes to open ended and quickly changing fields...
That's only a helpful response if you believe exams are testing for your ability to synthesis novelty, which seems absurd to me.
I mean synthesize, too late to edit.
Well, that's the problem. Someone can read out the question and receive results from someone else or be actively sharing their screen. It doesn't really matter what kind of test you have.
Honestly, any tests I took that were open book were long enough that if you were constantly looking things up you'd never finish. Same would occur if you were talking to someone and they were having to relay the info.
Part of the benefit of open book tests is that it brings up the "floor" of resource access, limiting the potential differential between honest and cheating students. Obviously, this could be negatively compensated for with other factors, like having more questions or stingy grading.
I think part of the idea is to discourage sharing of answers... But then you should build open book personalized exams. But then the student might hire someone to answer for them. But then why bother everyone because of those cheating people...
>Make the exams open book and then the only thing you really need to worry about is the correct student taking the exam
This doesn't solve the problem of a student having an expert sitting off camera, feeding the student the answers.
Exactly - assessment methods that are difficult for the sake of being difficult benefit nobody. They’re only marginally more meaningful than FAANG interview questions.
Employers, students and society as a whole have all moved on; they want assessment to demonstrate that students can do what the course has taught them (known in the jargon as “alignment”), not memorise a bunch of facts that they can regurgitate on demand.
> the only thing you really need to worry about is the correct student taking the exam
Which is the hard problem these solutions try to address, because even if the correct student is sitting in front of the computer, that doesn't mean he isn't just proxying the questions and answers to someone else (possibly paid).
By far this is my favorite part of working in the fire & safety industry. Open book exams for certifications just show that you know how to apply the knowledge within the code books. All it requires is that I remember a handful of section numbers relevant to areas of the code.
Understanding always aids memory, and memory is important to mastering anything.
Brute-force memorization, however, is subject to decay and constant forgetting. It is astonishingly difficult to memorize a bunch of meaningless numbers unless you take the time to make a mnemonic for them.
People can easily share the solutions to real exams, as well. (Though I agree the bar is much higher.)
I completely agree with you but good luck convincing anyone to do that. Memorization and regurgitation questions are easier to create, evaluate and grade. Asking meaningful questions, evaluating each student's answer and taking an individualized approach to each student requires a lot of time and effort which are the most scarce resources in today's mass education centers.
So they'll keep asking stupid questions as if the student was a database being queried. It's no wonder that people can solve them by querying actual knowledge bases! Schools won't accept that: they are selling degrees and they don't want to make it too easy for students to pass their courses. So they come up with the absolutely ridiculous notion of cheating.
The whole thing's so phony and artificial it's mindboggling. I don't think it's even possible to get a high quality education anymore. Maybe if a student is lucky enough to have a one-on-one mentorship.
I think Zoom proctoring, in combination with some increase in communication overhead, is quite practical. We recently did this.
Giving students different exams (selections of slightly and subtly different tasks from a task pool), not showing task names, and mixing up the order of tasks for each student seems to work quite nicely.
The increased overhead of communicating which exact answer they need, finding out which other student has the exact same task, etc., has worked really well for us. Of course, you have to set it up in a way where they do not have much time left over to shoulder this overhead.
Edit: So students go from "What's the answer for 5?" to sharing the topic of the task, the task description (there may be a negation hidden in there), and the constants, with other students needing to compare. Maybe they aren't even at this task type in their exam yet, as the order is mixed.
Edit2: We weren't really watching the video stream. It was just to discourage students actually sitting side-by-side, which would decrease the communication overhead drastically.
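For what it's worth, a minimal sketch of that kind of per-student randomization, assuming a pool of interchangeable task variants grouped by topic, could look like the code below. The pool structure, the seeding on student ID, and the names are illustrative assumptions, not a description of any particular exam system.

    # Toy sketch: build a shuffled, per-student exam from a pool of task variants.
    import random

    TASK_POOL = {
        "recursion": ["variant A", "variant B", "variant C"],
        "sorting":   ["variant A", "variant B"],
        "graphs":    ["variant A", "variant B", "variant C", "variant D"],
    }

    def build_exam(student_id: str):
        rng = random.Random(student_id)        # deterministic per student, reproducible for re-grading
        exam = [(topic, rng.choice(variants))  # one randomly chosen variant per topic
                for topic, variants in TASK_POOL.items()]
        rng.shuffle(exam)                      # mix up the task order as well
        return exam

    for sid in ("s100", "s101"):
        print(sid, build_exam(sid))

Each student then gets the same topics but a different mix of variants in a different order, which is enough to impose the communication overhead described above.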
Currently a student at a school where this is happening as well, and I can share a bit about it. In my classes (mostly CS and math), I never used Proctorio, as a lot of my professors seem to share the ethical concerns, as well as concerns about buggy software creating false positives. This lawsuit has been happening for a few months, as has the CEO of Proctorio leaking student chat logs and threatening the student after the student made a post complaining about the use of Proctorio, both of which were shared and highly upvoted on r/UBC (links below). Certain faculties relied on Proctorio a bit more; business, for example, has shifted to more extreme measures, where the professor requires the student to be logged into two Zoom accounts with cameras on, one facing the student's face/hands, and the other facing the student's desk/screen from behind.
As for alternative options, most of my classes used open book paired with tighter time constraints, the possibility of questioning students immediately after the exam if anything looked suspicious, and making the tests harder and scaling afterwards. Although there'll always be bad apples, I'm thankful many of my profs have adapted very well to help the students learn.
Student's post: https://www.reddit.com/r/UBC/comments/hgiiu1/midterm_started...
Proctorio CEO's comment (u/artfulhacker) got deleted but is on removeddit; Linkletter is also there defending the student: https://www.removeddit.com/r/UBC/comments/hgiiu1/midterm_sta... https://www.reddit.com/r/UBC/comments/hhbowl/proctorio_ceo_m...
Harder or less student friendly for whom, exactly? If you'd previously have been failed due to the colour of your skin, removing these tools is a major boon to student friendliness.
It's actually very disappointing that this whole monitoring system was deemed necessary to begin with. How screwed is the system that anyone would believe this level of intrusion should be part of the education process?
Cheating was absolutely rampant when I was in college getting my CS degree, and that was with tests taken in person with the professor in the room! I'm sure students are even more brazen when all assignments are done at home.
I think at the very least, all exams ought to be designed as "open book" with a time limit. That at least eliminates the issue of students using unauthorized resources. I'm not sure what can be done about students who collaborate on exams or go so far as to hire someone to take it for them. Perhaps a few students could be randomly selected for each assessment to explain a few of their answers over Zoom?
Indeed. I think people here are underestimating the volume of cheating in school.
It isn't a minority. It is an overwhelming majority if they think they can get away with it.
https://www.cleveland.com/metro/2017/02/cheating_in_college_...
https://www.cbc.ca/news/opinion/opinion-academic-cheating-1....
> I think people here are underestimating the volume of cheating in school.
Not defending Proctorio here, but I didn't really understand how rampant cheating was, either, until I became friends with someone who works at a University.
Growing up, my friends and I would never dream of cheating and would fear the consequences of getting caught, so the entire concept feels foreign. We also grew up in the era of in-person classes, before cellular phones were common.
Some of the cheating stories I hear from my friends in University are mind-blowing. Everything from (college) students thinking the proctor won't notice them using their cellular phone during tests, to people trying to hire test-takers who show up with fake IDs. They had to start giving proctors photos of each student because checking IDs wasn't enough any more.
The situation is much more complicated than it sounds. As soon as one person gets away with cheating, it becomes a cheating arms race as even the good students feel the need to cheat to keep up. If an entire institution becomes notorious for rampant cheating, the value of every graduate's degree goes down.
Obviously the Proctorio solution is terrible, which is why I expect we'll see Universities push to return to in-person classes sooner rather than later while keeping remote as a 2nd class option.
We can't stress the importance of school and how it will affect the rest of their lives, and then not expect them to do whatever it takes to make sure they do well there. For some, it might be a matter of making sure they have an excellent record; for others, it might be a case of keeping them in the system at all if they didn't get there through merit, or if they feel like they can't keep up enough to stick around. I imagine the latter might be even more of an issue for large, big-name schools, of the type where you often hear that one of the real reasons to go is the connections you make.
> We can't stress the importance of school and how it will affect the rest of their life, and then not expect them to do whatever it takes to make sure they do well there.
That's basically what happened here: The school expected students to cheat and looked for solutions to address it.
I think it's very important that universities have very significant consequences when students are caught cheating. Cheaters don't generally start by hiring people to write their papers and take tests for them. They test the waters with little cheats here and there, pushing the boundaries over time.
If Universities made examples out of students who were 100% confirmed cheating beyond reasonable doubt, the amount of cheating would decline significantly. Instead, we're stuck with this game of half-baked anti-cheating systems which some students approach as game.
> If Universities made examples out of students who were 100% confirmed cheating beyond reasonable doubt, the amount of cheating would decline significantly.
I don't think that actually follows. As I understand it, many universities have zero-tolerance plagiarism and cheating policies. The problem is, when you have no middle ground and your policy is extreme, you leave yourself very few options in the cases where you don't want to or can't easily expel the student. If your policy is expulsion for cheating, then not doing so and providing a lesser punishment is seen as favoritism if the student has resources, or is used as ammunition for their own case by students with resources if it is offered to a deserving student otherwise.
I think the actual solution is well-defined, less flexible, but not overly harsh punishments. Any student caught cheating or plagiarizing is not expelled, but immediately either fails the class or drops two letter grades (and a second time in the same class would mean failing the class no matter what, if the more lenient option was chosen initially).
With extreme consequences, teachers are going to be hesitant to report small and borderline cases, because the consequences are so large. Make the consequences manageable for single occurrences but problematic if they keep happening, and you'll correctly catch those serial cheaters and those occasional ones who are better off just taking the hit (and maybe dropping and retrying the class), instead of letting them slide because the punishment is disproportionate to the crime.
Think of it this way: if the punishment for stealing a candy bar was life in prison, would you call attention to the person next to you that you just saw steal a candy bar? Does the store attendant actually call the police, or just take the candy bar back and tell the person to leave? What if the person is rich, and it actually causes you problems to turn them in, because it's worth it to them to make sure they don't suffer that major negative consequence by exerting their influence? If the punishment is seen as disproportionate to the crime, people will make their own decisions to avoid what they see as a problem with the system, and it also means that people with resources are more likely to exert those resources to avoid those problems, to the detriment of those around them (and they'll mostly get away with it, because who wants to die on the hill of making sure someone is punished for something so inconsequential?).
> If Universities made examples out of students who were 100% confirmed cheating beyond reasonable doubt, the amount of cheating would decline significantly. Instead, we're stuck with this game of half-baked anti-cheating systems which some students approach as game.
When I went to school they made a big deal out of the student ethics board, and if you were caught cheating you were brought before it. It was implied that you would be kicked out of school and your cheating conviction would be appended to any other school's request for your transcript.
I don't know how prevalent cheating was at the school.
> We can't stress the importance of school and how it will affect the rest of their life, and then not expect them to do whatever it takes to make sure they do well there.
Completely agree. For many students their lives and futures will literally be determined by the result of a single test. Under these circumstances, not using every resource available to win is simply stupid. Why would someone not cheat if possible?
The truth is academic integrity was doomed the second degrees became necessary for employment. People who care about academic integrity are those who are there by choice, not those who are pursuing a degree to improve their chances in the job market. The only possible solution to this problem is to end the "everyone must go to college" meme. Good work opportunities that don't require decades of education would go a long way.
It was eye-opening for me to see how much culture affects the perception and even the understanding of the concept of cheating. People from some non-Western cultures, honestly AFAICT, cannot understand why they should not 'help out' a cousin or older student or a military superior.
Only anecdotal, but I personally know of one case in China where the bulk of a doctorate was done by a student's friend. Can't speak to how widespread this is, but the problem is real.
Having taught at several institutes in the Anglosphere, a disproportionate amount of cheating I've witnessed has come from those who speak English as a second language. Difficulties with English and the need to 'save face' probably push otherwise honest students into doing this. Which also says something about admission tests. It's something of an open secret that administrators turn a blind eye to suspicious admission results, as enrolment and tuition money are a strong incentive.
It's not an easy problem to solve, but I suspect proper admission filtering would eliminate the bulk of cheating, and the remainder could probably be dealt with by empowering professors.
If the professors or TAs are paying attention, they already know who the cheats are. It's not hard to spot when weak students suddenly start acing tests and mastering skills they didn't understand the week prior. There needs to be more leeway, institutional support, and incentives for academics to challenge suspicious students, e.g. via oral examinations.
Trying to solve this using software is just an attempt to avoid tackling the real problems.
> It's something of an open secret that administrators turn a blind eye to suspicious admission results, as enrolment and tuition money are a strong incentive.
Money talks. TOEFL (Test of English as a Foreign Language) scores tend to be treated as gospel, even when it's clear from emails that the student has nowhere near the level of proficiency indicated by the test results.
Consequences?
A friend of mine copied an assignment to the point that he even included the other person’s name in the code.
He got 100% and a note telling him not to copy again.
There are usually policies to give guidance and mentorship on 'academic conduct' for minor and first-time offenders; but if the cheating was so extensive, then that's a serious dereliction on the examiner's part.
I can understand why it happens though. For the professor to take it further, they'd have to escalate it within the school, involving a whole load of bureaucracy, meetings, production of evidence, and likely a formal hearing with the student, etc. Unfortunately, there's often no incentive or institutional support for professors to do any of this.
While I was at university for computer science, I often encountered "senior" students (both graduate and undergraduate) who were truly astonishingly unversed in the most basic things. I think my "favorite" was the time I did a group project with a student who had had an internship working in the kernel at Sun the prior summer, and he turned in his portion of the project (in C) with code like "[...] char* foo = ""; foo += "bar"; [...]"...
...and yet I never thought to wonder how they managed to keep passing exams.
Maybe we should stop endlessly pushing and insisting on degrees...
Maybe skills tests in an interview process are not such a bad thing.
As an employer, I don't care what piece of paper you have from what school. I only care about how well you can do the job I need to be done.
>> As an employer, I don't care what piece of paper you have from what school. I only care about how well you can do the job I need to be done.
To be blunt, duh. I agree degrees are very imperfect & crude indicators of what you're trying to measure, but what is your alternative? I'm getting DoS levels of applications blasted at me; how do I filter at an effective level that removes the same cheating mechanics mentioned here?
I built tested recruits.com and skills test every applicant. You can use it free if you want.
Exactly this. I've given many chances to people who didn't have the paper, and almost all of them have risen to prove not only that they are just as smart and capable, but that working in technology is a passion/hobby for them. I don't care about the paper at all; it doesn't hurt, and I value education, but it's never been a deciding factor when it comes to hiring.
I don't think you want a doctor without a degree, and there are many similarly difficult professions with a lot of impact.
Yes and no. Doctors go to (effectively) trade school after undergrad, and it's actually that degree you care about. Does anyone care where they went to undergrad?
Maybe in the US. Medical education in the UK starts at the undergraduate level, there is no pre-med.
I agree, but I got mine later in life because not everyone thinks that way, and I didn't want not being in the club to block some opportunities.
> Maybe skills tests in an interview process are not such a bad thing.
I'm fine with this in theory, but lots and lots of places bungle it so badly that they're no longer testing programming skills, they're testing rote regurgitation of memorized code under pressure. If your job legitimately requires that, for everyone's sake get out of the industry.
Is the implication here that seeing that they have a degree is better than what "lots and lots of places" do?
Personally, I think a degree doesn't say much about a person's ability to do what is required in the job.
Maybe we should just get better at giving skill tests in interviews.
> Is the implication here that seeing that they have a degree is better than what "lots and lots of places" do?
Testing whether they can remember a specific leetcode question is worse than checking for a degree.
Is that controversial?
Caltech has a strong culture of academic honesty and take-home exams.
Maybe some institutions are simply selecting their students wrong?
With a CS degree, a better solution is to make it project-based. Even if someone uses code found somewhere and just changes it up to not be obvious, that's not that different from actual industry, AND it means they had to understand it at some level. And that's for stuff they -can- search for (such as 'implement a data structure/algorithm').
There's software to detect copy-pasted or plagiarized code (it analyses the syntax tree, so renaming variables and changing indentation won't fool it!).
The best however, is a 1:1 code review with a TA or Prof. Randomly jumping to files, reading the code and asking questions about it. Extremely hard to copy a codebase and learn it well enough you can explain it but really easy to do if you wrote it yourself.
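For anyone curious what the syntax-tree idea looks like, here is a minimal sketch, assuming Python and the standard-library ast module. It is not any particular product's algorithm (real tools like MOSS use token fingerprinting), but it shows why renames and reformatting don't help the copier:

    import ast

    def normalized(source: str) -> str:
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # Blank out what students typically change when copying:
            # variable, function, argument, and attribute names.
            for attr in ("id", "name", "arg", "attr"):
                if isinstance(getattr(node, attr, None), str):
                    setattr(node, attr, "_")
        # Comments and whitespace are already gone after parsing.
        return ast.dump(tree, annotate_fields=False)

    def likely_copy(src_a: str, src_b: str) -> bool:
        return normalized(src_a) == normalized(src_b)

With this, likely_copy("def f(x): return x + 1", "def g(y): return y + 1") comes back True, while a structurally different solution won't.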
I have to wonder how well that analysis code works in practice. For a lot of intro-level (and some mid-level) course assignments, there's only one or two straightforward ways to write the code. Sometimes these assignments are just pasting together sample code from the PowerPoint with a couple of tweaks. I was a TA for a few of these courses, and the correct solutions tended to be very similar. I was only sure students were cheating when they made several of the same mistakes.
Re the 1:1s with TA/professor, I agree. If you can explain code that you didn't even write, you might be even more deserving of that A.
It's completely useless for intro courses. But in my experience, intro courses are completely bimodal for CS: some students just get it and some just don't.
Sure, the latter can copy their assignments. But once they hit a non-trivial assignment or a code review in a more advanced class, they won't be able to fake it anymore. You are just delaying their transfer to another major. And that gives the CS department one extra course they can collect tuition from and justify hiring TAs for...
100%. We had similar software when I was a TA, and I just ignored it and left it to the professor to deal with. I found a few cases of obvious plagiarism though, for the exact same reason you mention, too - the same mistakes.
Yeah, this description immediately brought to mind FizzBuzz. In a C-like language, there's basically a whole 3 ways to write it without doing something esoteric (loop a switch, loop an if/elseif construct, try to be cute with independent ifs and a boolean).
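For illustration, the if/elif form (shown here in Python, though the point holds for any C-like syntax) is essentially what every correct beginner submission converges on, which is why similarity alone proves little at this level:

    # Canonical FizzBuzz: almost every correct intro submission is
    # some trivial permutation of these four branches.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)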
There's always going to be software to defeat those tools! I've done my fair share of experimentation with source-to-source transformations; you can do things like substitute for/while loops, change conditions around, inline/outline various constants and variable declarations...
The sky's the limit when you think about it really.
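As a toy illustration of that kind of rewrite (my own sketch, not the poster's actual tooling): a tiny Python AST pass that "changes conditions around" while preserving behaviour, which is exactly the sort of edit naive similarity checks miss.

    import ast

    class FlipComparison(ast.NodeTransformer):
        # Rewrites `a < b` as `not (b <= a)`: same meaning, different tree.
        def visit_Compare(self, node):
            self.generic_visit(node)
            if len(node.ops) == 1 and isinstance(node.ops[0], ast.Lt):
                flipped = ast.Compare(left=node.comparators[0],
                                      ops=[ast.LtE()],
                                      comparators=[node.left])
                return ast.UnaryOp(op=ast.Not(), operand=flipped)
            return node

    src = "if x < limit:\n    count = count + 1\n"
    tree = ast.fix_missing_locations(FlipComparison().visit(ast.parse(src)))
    print(ast.unparse(tree))  # Python 3.9+: prints "if not limit <= x: ..."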
Sure projects are generally better than exams. But cheating on projects and papers is also rampant. Students just pay someone else to do the work.
Here's my solution to this:
1) Version control all assignments. I force them to make periodic commits throughout the week. If they want to copy a project wholesale, they have to do so in a way that takes a lot of time and effort.
2) Comparing bytecode. If they change the names of variables, comments, spacing, etc., these superficial differences will be lost in compilation (a rough sketch of this follows at the end of this comment).
3) Oral explanations. Now that everyone has guaranteed access to screen recording software, I force my students to record an explanation of their work. They go through their code line by line and explain it to me. I can tell very quickly when a student is explaining code they haven't written. If they are explaining code they haven't written to a satisfactory degree, then they've at least demonstrated they've learned something.
4) Remove incentives to cheat. Give them all the resources they need and more than enough time. Provide easily accessible venues like chatrooms and forums where they can ask questions anonymously. Provide opportunities to improve prior poor grades so they feel like failure is okay.
This doesn't eliminate cheating 100%, but from my experience it does seem to cause cheaters to fail, as their cheating does not pass for acceptable work. The students who earn an A come to see me during office hours and I know they are doing the work. I would say very rarely does someone earn an A who is not on my radar as being an obvious high-performing student. If a student is cheating and earning an F or D, I can't say I care much about that.
Notice none of these methods involve an invasive surveillance regime. I don't require cameras on at all times, 360-degree views of work areas to prove no one is helping, software to monitor tabs and processes... all this is completely futile. The students hate it and it incentivizes them to find ways to thwart it. They feel justified in doing so. I saw one post that advocated running a high speed fan to drown out typing noises, wearing reflective glasses in a dark room to foil eye tracking software, putting Vaseline on the camera... all to what end? To cheat on a psych exam? It's an arms race that's not worth fighting. The solution here is to adapt teaching methods with new technologies and testing methods.
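For the bytecode comparison in point 2, a rough sketch of how it could work, assuming a Python course (the same idea applies to, say, Java class files): compile each submission and hash only the raw instruction stream, which doesn't change when variable names, comments, or spacing do.

    import hashlib

    def code_objects(code):
        # Walk the module's code object plus every nested function/class body.
        yield code
        for const in code.co_consts:
            if hasattr(const, "co_code"):
                yield from code_objects(const)

    def bytecode_fingerprint(path: str) -> str:
        with open(path) as f:
            module = compile(f.read(), path, "exec")
        h = hashlib.sha256()
        for code in code_objects(module):
            # co_code refers to names and constants by index, so renames
            # and comment/whitespace edits leave these bytes untouched.
            h.update(code.co_code)
        return h.hexdigest()

    def looks_copied(a: str, b: str) -> bool:
        return bytecode_fingerprint(a) == bytecode_fingerprint(b)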
#1 just encourages people to make BS commits throughout the week to pad the numbers.
Not everyone does projects over time, not everyone commits all the time. I don't like committing in progress work and I usually do work in big chunks with large breaks in between, especially for solo projects where I don't have to share code regularly with others.
Based on your criteria, I'd likely be considered a "possible cheater" just for having a different style.
The way I explain this requirement to my students is that the proof-of-work is just as important to me as the end result. It's like in math class: you don't just write the answer, you need to show your work. Your commit history is not distinct from the assignment, it is part of the assignment.
Regardless though, my students usually are at a point where they don't have a commit style. They come to my class with no knowledge of git or version control -- I have to teach it to them.
Secondly, labeling you a possible cheater doesn't condemn you to anything, it just puts you on a shortlist of assignments to examine more closely.
Thirdly, this style of committing has helped more than a few students out when their computers die mid-assignment and they need to move to another computer. It also allows me to help them with their assignments. When they come to me with questions, the first thing I ask is "have you pushed everything to your repo?" Instead of trying to debug over a virtual desktop connection, I can run their code locally and get to the cause much quicker in my own environment. It really works out well for everyone involved.
This is for a class. How a student prefers to make their commits is not really important. If you are taking a class and the requirement for credit is doing commits of in-progress work you commit the in-progress work.
These are awesome tips - thanks for sharing!
Question: if (when) a student hands in an assignment without the periodic commits, what happens? How do you measure & set expectations for sufficient frequency of commits? (i.e., 1 single commit is too few, I assume that once a day is enough, but where's the threshold?)
Thanks again for sharing - I'll be thinking about these for my own classes!
The first half of the class they are getting used to git and version control, so I don't enforce it too closely. Generally I decompose the assignments into a number of parts, and ask for at least one commit per part. e.g. Part 1, lay out your project directories. Part 2, stub out your functions. Part 3 implement function a, etc. My students are at a level where they need this granularity of direction. They get lost easily if assignments are too open-ended.
In terms of getting them to follow the commit guidelines, I give them a carrot and a stick. The carrot is, if they start N number of questions half way to the due date, they can get a penalty-free extension if they need more time to complete the assignment. This is to incentivize students to start early. I've found one of the biggest indicators of successful students is that they start assignments early. Students who start their assignment on the due date usually run into blocking issues (software not configured properly, computer not working right, network is down) that would be trivially overcome if they had started a day or two earlier. Students who have this habit quickly correct within an assignment or two.
The stick is by the midterm, I start requiring it more strictly by subtracting points for not following the guidelines. I think of it like a math assignment; if all you provide on a math exam is the solution, you'll get little to no credit for that. Showing your process is part of the work in math, and I treat it the same way in my classes.
Not going to comment on the listed steps (that sounds like a pain to grade), but I will totally echo your comment about knowing how well someone is doing.
Every semester, students fell into the same buckets:
- Those I rarely saw, who nailed everything (and were getting an A or a high B because they understood the material)
- Those I saw constantly, who were going to get an A or a B (because they sometimes didn't understand things from class, or when they did, wanted to make sure; or, for a few, understood it all but knew the importance of showing the professor and TAs they cared about the material)
- Those I saw constantly, who were going to get a C or a B (they rarely understood from class, but were willing to put in the work to try)
- Those I never saw, who were going to fail (they didn't show up to anything).
It was pretty much invariably this last group that got caught cheating. That's not to say the other groups didn't, but they at least got working code, did really well on the exams (pre-COVID, proctored in person), and in conversation were able to show they understood, so I'm a lot less concerned about them.
It's great that you run your courses that way and I'm sure your students benefit from the experience. But is it really scalable for lower division courses at public universities with high student to instructor ratios?
I've used this technique over 3 semesters so far, with a total of 500 students. My team is myself, a TA, and 10-15 undergraduate graders. I try to triage grading with automated tests and a bunch of tools to give me an overview of how students are doing their work (looking at the first and last commit, commit frequency, etc.). The most work is in assessing the oral assignments, but I actually like doing those the most - I get to hear from my students in their own voice, which is nice during this isolated pandemic.
I think it could possibly scale to larger classes as long as the grading team scales with it. I'm not sure how public universities handle their grader allocations.
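The commit-overview tooling mentioned above doesn't need to be fancy. Something like this sketch (the submissions/ layout is my assumption, not the parent's actual setup) already surfaces first/last commit dates and commit counts per student repo:

    import subprocess
    from pathlib import Path

    def commit_summary(repo: Path) -> dict:
        # ISO dates, newest first, one line per commit.
        result = subprocess.run(
            ["git", "-C", str(repo), "log", "--format=%ci"],
            capture_output=True, text=True,
        )
        dates = result.stdout.splitlines()
        if not dates:
            return {"repo": repo.name, "commits": 0}
        return {"repo": repo.name, "commits": len(dates),
                "first": dates[-1], "last": dates[0]}

    # Assumes one cloned repo per student under ./submissions
    for repo in sorted(Path("submissions").iterdir()):
        if (repo / ".git").exists():
            print(commit_summary(repo))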
Which presumably is why a lot of mine also included demos followed by questions about the implementation.
Projects also need to change dramatically year after year or else people will just get the project from the year prior.
They can change slightly (even if someone gets the project from a prior year and modifies it a bit, that still requires understanding), or they can be open ended enough to be obvious if someone directly cheats (i.e., the 'create a game for the Game Boy Advance' project I had)
This is the answer.
No one wants to hear this answer because it means actual work for the teaching staff (compared to, say, automated scoring of multiple-choice tests), but it is, in fact, the answer.
> actual work for the teaching staff
It's not just "actual work", writing high quality tests and projects and then grading them is multiple full-time jobs worth of work. It's equivalent to interviewing 100 job candidates every week, forever.
Speaking as someone who TAed a couple years: I much preferred grading projects to tests.
Projects they had to demo, and they had a published rubric. While it was possible sometimes to play to the rubric, it still meant there was very little work to figure out a grade.
Tests, on the other hand, I had to dig in, understand what they were trying to do, figure out if it worked, and award partial credit. I guess the prof could have done multiple choice to make grading easy, but that has all kinds of problems with actually testing anything.
Yeah, I've done it. Both as a TA and as instructor of record.
The fact that the university doesn't provide adequate staffing is not an excuse for short-changing the students still further.
> actual work for the teaching staff
Actual work for the already incredibly underpaid, over-worked teaching staff. As the money dries up in university funding, teachers are stretched thinner and thinner. That forces them to rely more on automated testing instead of having the time to actually know more about their students' performance. That in turn makes it easier for students to cheat.
The economics (as usual) are perverting the system. The individual participants are merely suffering from the perverse incentives of the system.
More likely, have TAs write the variations while paying them the state minimum wage. Arguably a feature if that means you need to hire TAs. A lot of students need work-study as part of their financial aid package.
Your answer indicates you don't really understand the intent or desires of university profs. It's not to be the best version of the teachers you had in grade school. We can argue if this is THE best approach, but it is how the system is set up.
I taught at a Big 10 university for 12 years, both as a TA and as instructor of record.
So I think I actually understand the system pretty well.
Edit: teaching is part of the job, dude. It just is. If you asked the average taxpayer (you know, the ones who pay the base salary before grant money is taken into account) whether teaching is an important part of the job, what do you suppose they're going to say?
If you don't want to teach, you need to find a job that doesn't involve teaching. If you accept a job that requires teaching, you're morally obligated to provide the best teaching you can.
Or so I see it.
Since the pandemic began, I've been giving my CS students take home exams, and the results have been so great. They are open everything, including the internet, and the students have 4-5 days to work on the exam, which is a project. I just don't have the ability to police 150-200 students as to whether they are cheating. So I assume they will cheat, and give them a cheating-resistant exam. Instead of asking knowledge-based questions they can look up, I've moved to analysis and synthesis type questions, as well as more substantial projects.
Pre-pandemic I would give them a paper exam in which they had to read and identify bugs, or write code by hand. This was the tradition. Now they actually use their tools to the best of their ability, and I get to see what they are capable of.
You might think this testing paradigm would result in everyone getting 100%, but that's just not what I've seen. The distribution of grades the last exam was normal with a mean of 80%, pretty much in line with what I get during a typical semester. As we transition back to a classroom environment, I don't expect to move back to paper-based CS exams. They are just wrong on so many levels.
You sound like a good teacher. My experience at university was that the good teachers were a minority. Most would do their research and the minimum for teaching.
I'm teaching faculty, so I don't do research. Research faculty are right to prioritize their research -- they are incentivized to do so by administration. Their tenure is predicated almost 100% on their research output and the grant money they bring in. Teaching ability is an afterthought when it comes to tenure and promotion. It doesn't make them bad people or bad at their job, it's just that their job isn't really teaching according to basically every signal sent their way by the admin.
I understand the incentives, but teaching is part of their job even if they need to do research, and from the student perspective they are bad professors. If all I needed was to learn it from the book/youtube/internet and take tests, there would be no need for professors and they wouldn't exist.
I've had professors that did more damage by 'teaching' (I use that term loosely) than if they had just said 'learn it from the book/online and show up for labs/tests/exams'. Mostly it was because they had no idea how teaching worked.
It's a very faculty-centric view, because from a student's perspective, the teaching is 99% of the school. Very few will go on to research. They come to school to learn and earn a degree. If the university cannot provide good teachers, it is not a good university. If a professor cannot teach, they are not a good professor. That isn't a moral judgement, it's just the subjective judgement from the other side of the school desk.
Kinda disagree, doing a bad job of teaching isn't OK regardless of whether it's incentivized by your job. You have moral responsibilities too, like putting in a good effort for people who are relying on you.
> The distribution of grades the last exam was normal with a mean of 80%, pretty much in line with what I get during a typical semester.
The distribution is one thing, but grade correlation with other statistics might be more revealing. In particular, how do you know that wealthier students aren't hiring people to take the exam for them?
I think one thing I can't protect against is students completely faking their identities. Meaning if I expect Matt Smith, and Matt Smith has hired John Doe to interact with me all semester, I can't detect that, because I don't really know who Matt Smith is. But Matt Smith would have to know that John Doe will be available to impersonate him for the next 3 years because they will be taking several courses with me. If Matt Smith is rich enough to hire a double to essentially go through college for him, then good on Matt Smith for being independently wealthy I guess.
The other thing I do is I don't make exams so weighty. If you ace the exam but you don't do well on the assignments, you're not getting an A in the course. So the hypothetical wealthy cheater will have to keep their body double on retainer for the semester.
But I will say that the students who earn an "A" are ones I interact with fairly regularly. "A" students typically come to class, ask questions, engage with the assignments, start their assignments early, etc. And before anyone accuses me of playing favorites, I don't do any grading, and grading is anonymized. But I can easily predict the "A" students just by their work ethic. The idea of the brilliant student who doesn't come to class and effortlessly breezes through the exam is as far as I can tell after teaching thousands of students a myth. At least at my university. Maybe all of those kids go to MIT.
Is this the responsibility of teaching staff? They're not the police. In the end it's the students' responsibility to study; exams are just a way to make them understand that they should. If they cheat, they lose an opportunity to learn. They'll pay the consequences later, at work.
Their employer will pay the consequence later.
The university will pay reputational consequences for claiming that the student learned particular things which they clearly didn't.
At the university level, there is really no excuse for closed book tests. Quality tests focus on how well students understand the material, and have questions that are complex enough that attempting to learn the material while taking the test is infeasible.
I think conversations about cheating are missing the forest for the trees - or the learning for the degree.
I maintain that cheating is almost always a pedagogical problem first, and a trust problem second.
Cheating becomes a convenient solution to a problem when you're dealing with a course with inadequate teaching, a difficult learning curve, a lack of motivation for students to do their work to the best of their ability themselves, or a nonsensical curriculum. Fixing cheating doesn't involve surveillance - it instead involves removing the incentive structure that exists for cheating in the first place. This may involve rethinking grading, or course material, or assignments; but is certainly not impossible.
We act surprised when students "cheat" in CS exams that are expected to be done with only pen and paper - nearly any real workplace will give you an option of a text editor or IDE of your choice. So give them an IDE! Give them the API documentation! Don't create an incentive to test the waters to fix the broken rules of assignments.
Another relevant area of work is ungrading, or self-graded courses in general - when you remove the friction that grades cause in the feedback loop of learning; learning becomes an organic process for everyone involved. There's a lot of interesting pedagogical research, and just "cheating is rampant" doesn't scratch the surface of "but why is it?"
In addition, cheating is a game. Every second you spend drumming up cheating in front of your students is another second they think about trying to get away with cheating you. If you tell students they're not to be trusted, they will not give you any reasons to trust them; in many cases it's as simple as that.
A combination of good pedagogical design, and building a relationship of mutual trust with your students, is certainly more fruitful than creating an academic police state (of which Proctorio is only one part). There will always be people slipping through the cracks, but there are other safeguards in the world to catch them too.
Another important thing is that conversations about cheating always assume a very specific framing of higher education - that they exist primarily as a gatekeeper or arbiter of who-knows-what; the university also has the purpose of providing an environment for learning. And in many cases, cheating is just a result of a failure to provide that environment.
In addition, if the primary beneficiaries of university degrees are the employers (or the people who care about the who-knows-what stamp), then why do students foot the bill for tuition? If you choose to accept this framing of universities primarily as arbiters, isn't access to a degree just a head tax to enter the skilled labor market?
This seems so insane to me. This isn't high school, you're going to be expected to actually use the stuff you are learning in just a couple of years when you're in the real world. Nobody is going to code for you after you've graduated and gotten the job. For that matter, how are you going to pass an interview if you don't know the material?
Or are they cheating at the "take these courses to become a more rounded person" stuff, like the insect biology courses for the CS students?
Still, you're paying a fuckton of money to supposedly learn stuff in college. Cheating your way through is a huge waste.
When you've seen exam-sitting-for-hire in action, you believe it. Most students are honest and genuinely want to learn, and the few that are dishonest cause everyone to be subjected to this. It sucks for everyone, students and staff.
Which does not make tracking everyone a solution to the problem of cheating in exams.
Higher level education is largely voluntary, and it's up to the person taking it up to decide what they gain from it. If they are only in it for a diploma, they'd get there one way or another.
To me, focusing on finding "cheaters" makes education a competition. I never felt cheaters got anything over me in my studies, and I never felt like I've got a lesser grade because of them. Does that happen in cases where you've seen "exam-sitting-for-hire" in action?
Even if you normalize your grading scale based on the students taking the exam right then, if your claim that "most students are honest" is true (and I believe it is), that should not affect any non-cheater significantly (unless you've got a small, non-representative group, but then normalizing grades is unfair to begin with).
So my question is: who are we trying to solve the problem for? What is the expected outcome, knowing that there will always be people who "cheat" their way through life too?
While I despise software like this and how high-stakes and petty it can be, keep in mind that it's in the institution's interests to not only provide a quality education, but to defend the value of their brand.
You might have worked very hard for a diploma from University X, but if it was found that University X was handing out diplomas like candy to other students that were not as academically rigorous, it weakens the value of your diploma.
Requiring dystopian spyware doesn’t weaken your brand. It destroys it. For most of history, reputable universities built their brand just fine without any software at all.
For most of history, in-person exams were possible.
Totally agree. I personally chose not to go to college (I'm graduating HS this year) because of things like Proctorio and Zoom becoming more common.
Honestly, it's fine to not choose higher education but IMHO use of surveillance methods should affect where you apply, not whether you apply.
No but it's very reasonable to wait a year before applying so that you don't pay out the nose for zoom classes (there are things like socialization and connections that make a super expensive college worth it)
It’s possible that some institutions are dictating blanket use of software, but where I am studying that’s not the case.
It’s basically up to the instructor on how to run the class.
Some require proctorio, some are on Zoom with lots of TAs to watch the screen, some issue an exam on your own time where the browser monitors what you do / visit, and others just offer take home projects.
Unless every other university also implements it, then it’s just par for the course
Yes. This is a problem for universities to solve, with the money they get from constantly expanding tuition. This is one of many places in which a problem is cheap to "solve" if you are OK with being racist/ableist/generally not adapting to the variations in your customer base.
Handing over the solution for the problem to a software company shouldn't be an accountability shield for the Universities. Impacted students should sue the universities, and then they should sue the software company.
The role of universities is to provide education, not to police students. Put in place reasonable means to detect cheating (Proctorio isn't one), and then it's the students' responsibility to learn. They're adults and should value what they get. If they don't, they're missing an opportunity.
It's not even the value of their brand - it's the value of all students' diplomas
Universities don't give a shit about that. And they shouldn't. A lot of diplomas are worthless, but that doesn't make the education worthless.
Ideally though, I've worked very hard to get certain knowledge. The diploma is a nice side effect that's supposed to be shorthand proof of it, but most companies in e.g. software engineering will test candidates anyway, which is where the knowledge helps; the degree, not so much.
When your institution's reputation is at risk, you want to prevent it. Honest students don't get a lesser grade; that's not the problem. If the cheaters want to cheat their way through life, that's their problem. My problem is to make sure the rest of my students end up with a well-respected degree. I.e. the problem is being solved for the honest students, the ones I want to spend my time on.
I get what you are getting at, but I'd rephrase it as "cheaters should remain a minority". When we phrase it like that, a number of other solutions might pop up to discourage cheaters from enrolling in the first place (eg. lots of custom projects through which students learn anyway, potentially invalidating the need for a final exam too).
And suddenly, privacy invasion gets off the table quickly (as soon as you are not aiming for 100% non-cheaters, the cost becomes obviously too high for everybody else).
As far as reputation, I'd rather see schools focus on the successful students, which is somewhat done with all those research-paper-grading systems (not a perfect system by any means because of gamification, but at least an idea in the right direction), but mostly done by bragging about the scientific breakthroughs to come out of their students and staff.
Still, what is the purpose of a reputation or "well-respected degree"? The goal should be knowledge and applicability of that knowledge to actual problems in life (known as "jobs"): it's not like anyone accepts any graduate without interviewing them first, which is to say that nobody trusts any school to have done a proper job of evaluating them. Most of those schools don't trust themselves, so they hold interviews for post-graduate studies too! :D
Nobody looks at the "lemons" coming out of a school to consider it a bad school (I am sure you can find plenty from "top" universities too), but on the successful ones. Do the successful ones change with more cheaters at all? (Sure, there is a turning point, but catching all of them is meaningless)
Well, people aren't born cheaters or not, they choose to cheat as a result of circumstances [1]. Opportunity is one factor. Another is perceived unfairness about what they're asked to do. Our institution becomes better as our students do, when they don't feel the need to cheat. Avoiding opportunities to cheat and unfairness incentives is key to doing that. Invigilation software, the same test for online students as local students, and plenty of online tutor help are three approaches we're using for remote assessments. A small, brief reduction in privacy comparable to their online tutorials is proving perfectly acceptable to students. Unfortunately custom projects and the like won't work for teaching fundamental maths skills.
[1] There's quite a bit of research on this. Bretag is a key author.
Thanks for the references! I am simplifying a bit to get my point across: I am well aware that nobody is a born cheater, and that the fear of getting caught stops many from cheating too.
> Unfortunately custom projects and the like won't work for teaching fundamental maths skills.
I am not sure I agree. One of the projects I did for my projective geometry class was to do an inverse of a projected drawing (in AutoCAD) and save a 3D .obj file. In differential equations course, I was given a project to prove a theorem that is generally missing from the school books (or usually given as an exercise). You can also let people devise proofs in a closed system of axioms and a few theorems.
It is a hard and different type of work, but I am certain you can both teach and get to know students with theoretical math projects. If they get someone else to do it for them, yet they are able to present it in front of the class (online) convincingly, they have likely understood the concepts, which is what teaching is all about.
In many courses, the grades are curved, so cheaters might really make the grades of honest students worse.
While I am against this kind of surveillance software, making sure that tests are fair is very hard, especially since everything is online now thanks to COVID.
Not in my experience. I don't think there are enough cheaters to shift the curve (the noise from year to year is much more significant), and despite what you'd expect, the people hired to do others' assessments aren't necessarily very good at it!
A top 10 MBA program my partner went through had several people cheating in the program. I'll give you some anecdotal notes. Some cheaters get discovered and "outed" (esp those the group didn't like) when they brag during parties, some never gave up enough evidence to make an accusation official, others would talk about hiring CPAs for accounting exams/projects. Many in the class were concerned on whether normal study groups counted as cheating, as defined by the honor code!
A sibling program at the university exposed a cheating ring of ~15-20 people, and I think many were surprised that the result was to simply zero their grades for that class instead of more severe action.
/e I mention incidents and sizes since these programs are not thousands of students but tens or hundreds.
What kind of an MBA program is it where students worry if normal study groups are ok?
It sounds like the definition of cheating was very broad there.
Depends on you, the professor, the college, and the rules as to how you might interpret "individual" homework assignments and study "groups", particularly if you're led to believe you'll get removed from the program for transgressing. This particular issue was fleshed out quickly (and sanely, imo) but it did contribute to why the group didn't like one particular individual. Not everyone in the program is there to network and build trust!
I switched to online exams this year: no Zoom, no special software, no checks at all. Grades were as usual. A couple of students cheated and were detected; all the others were honest to a degree that I probably wouldn't have been in their place. It can work, but you need to expect people to look things up.
One of my colleagues had an online exam. The exam was uploaded to chegg within minutes, by multiple people. Unfortunately, the tests were personalized, so the people could be identified.
I think cheating is quite common in the undergrad years, but gets less later.
How did you get access to Chegg? I thought of buying a subscription to check for cheaters, but there were complaints on the net about Chegg charging credit cards on canceled accounts.
I didn't, my colleague did. Not sure how. He asked them to take down the tests, and they did, within a couple of days though. Answers to questions are up within minutes!
That's what I meant with "normalized grades": I wasn't familiar with the term "curved grades", but I covered that point with why I don't think that's a problem.
> I.e. the problem is being solved for the honest students, the ones I want to spend my time on.
As the original commenter notes, here the problem is being solved for "well-behaved" students, in a way which is easy enough for dishonest students to bypass and present themselves as well-behaved.
Realistically, if all existing diplomas were invalidated and the entire system of diploma-granting were to be stripped away from educational institutions, an equivalent would quickly be re-established by the free market - and it would probably resemble Proctorio.
> So my question is: who are we trying to solve the problem for? What is the expected outcome, knowing that there will always be people who "cheat" their way through life too?
For one thing, to limit all the harm that incompetent professionals can cause - and in a modern society, the scope of that is not inconsiderable. In fact, Proctorio is probably an example.
Let's not lose focus on the real problem here, which is Proctorio being an aggressive vendor of garbage.
Is there a correlation between having a degree and competence? Because my experience is that there very much isn't.
I would go so far as to say that the information age has made educational material so widely and easily available that the only value universities provide is signaling and wealth/class filtering.
Could engineers that cheated their way through college have developed microprocessors? Built bridges and skyscrapers? Achieved space travel?
Perhaps there is value in knowing someone has been able to work in a team setting and complete given tasks.
They certainly could. For instance, many of the still surviving bridges in Europe were built way before university degrees in architecture. I'll consider them cheaters because they didn't even attend a University! ;)
But there is another way to look at it: can all non-cheating students build bridges, design microprocessors or design rockets? I can tell you the answer is a definite no.
> if all existing diplomas were invalidated and the entire system of diploma-granting were to be stripped away from educational institutions, an equivalent would quickly be re-established by the free market - and it would probably resemble Proctorio.
coming soon to a tech interview near you.
Employers definitely use grades as a mechanism to determine who gets an internship or a full-time position upon graduation, especially in the legal industry. The idea that cheaters aren't a big deal breaks down once grades have real-world consequences.
> Higher level education is largely voluntary, and it's up to the person taking it up to decide what they gain from it. If they are only in it for a diploma, they'd get there one way or another.
I've never once been asked about my grades, nor asked to provide proof of my graduation, in getting a software engineering job.
Then again, I have a hard time envisioning what would someone cheat about in a legal exam, other than not memorizing the things, but that's to me just a signal that the content is badly presented (instead, put students in a pretend courtroom in a case that covers the study material, and they'll have to learn it, and learn to apply it).
But even if we accept it as so, it is not an argument to be so vigilant in catching cheaters majoring in other subjects.
I'm not familiar with the software in question, but I'm quite sure it'd still be possible to have someone else to sit your exam. For instance, you could have an external webcam pointed at yourself, and have someone else in front of your computer writing the actual exam. Maybe even mirror the display so that you can see what your accomplice is doing.
I've had the experience of taking an exam with Proctorio, and I see many ways it can be cheated, at least technically. They also say they monitor your head and eye movements, which is supposed to reveal something unnatural if you're cheating. Though I'm not sure that's really possible, and it may just be security theater.
I think you would be surprised at the high percentage of students who will cheat, given an opportunity with as low a risk profile as remote learning.
My university had an honor system, proctors were explicitly not allowed in the room during the test except to make announcements. Worked well enough as far as I know. Cheating was dealt with harshly when reported.
It doesn't seem to suck for higher admin types...they treat the student like a customer and therefore never expel them for cheating. System is broken because of incentives (of course).
Most people don't smuggle drugs in their rectums, but the few that do cause everyone to be subjected to random rectal cavity searches?
It's just another tool to internalize the discrimination. As long as nobody hits both them and their clients hard, holding them responsible for building the tool as such, for choosing it, or for continuing to use it knowing the issues, there's no incentive for anyone to do better.
Think of it another way. If you knowingly contract a member of the KKK to do your hiring, you can't pretend not to know why people of color don't get hired in your company.
Proctorio's issues may be more subtle than the "in your face" example I gave above, but they're there, and whoever contracts them does so with full awareness of them.
P.S. Because I'm sure the wave of downvotes is less about people supporting racism and more about ignorance, let me further support my point about such tech with real life examples:
https://www.vice.com/en/article/g5gxg3/proctorio-is-using-ra...
https://www.theverge.com/2021/1/28/22254631/university-of-il...
I don't like proctorio, but in situations where it is really necessary for students to demonstrate a certain level of knowledge (think: medicine), I just don't see a reasonable alternative to online proctoring, given the circumstances.
Why would you say that this is evidence of a 'screwed' system?
It’s hard to see it in any other light. You could always have online exams happen in front of an online board— take the human interaction we’ve always used and just move it online.
Proctorio reflects the cost cutting, hyper systemization of education, which is a trend that— every time I see it— seems to be antithetical to actual education.
In a room, I can make sure that there is no other person there, and no big monitor with solutions, notes, whatever. Online, I cannot see whether the student has another browser open with one of the cheat websites where one of his faster colleagues has already uploaded the solution. I'm luckily in a field where we can do open-book exams without too many problems, which makes the "hidden notes" a non-issue, but one wants to be sure that it's actually the student who solves the problem. That is much easier in a classroom.
If someone can use hidden notes or google to get the answers then the exam isn't testing for knowledge or subject-matter understanding.
While it doesn't test for understanding, it absolutely tests for knowledge. Take the medical profession. There are many easily googlable facts a student has to know without google. For example a list of symptoms for a diagnosis (or better: possible diagnoses for a list of symptoms). It's not the only thing, but it's part of the education.
Or in physics (my field), I do want a student to know U=RI. Not google it. Not look it up. I also want them to be able to transform it to U/R=I.
Not the parent, but I say that this is evidence of a poor system because authenticity of learning is obvious to a good professor. My best professors would always have a feeling of what ideas or strategies I would employ to solve their problems - if someone else did my project or wrote my paper it would be completely obvious.
Any of these anti-cheating systems are trying to cut costs by enabling weaker teacher-student interactions.
Higher education is already quite expensive. It would be great to have highly qualified professors spend more time directly interacting with students, but who will pay for those extra professors?
Why is higher education so expensive, anyway? It's already more expensive to go to college than it would be to just hire a set of PhD holders as private tutors for most fields
What a scam
Higher education is so expensive primarily because of the widespread availability of government subsidized student loans. Those have removed most of the pressure on prices. If we eliminated those loans then tuition would fall drastically, but that would also reduce educational opportunities for low income students.
In the past, we solved this problem by not having online proctoring, we had exams in front of a board. Maybe if the necessity of such demonstrations is paramount, doing them online is not the correct answer.
I guess this is a consequence of the Covid pandemic, where such a thing is not allowed at the moment.
The software - problematic as it is - is older than the pandemic. Also, maybe if you cannot reliably proctor exams, the solution should not be to use a highly invasive, potentially problem-causing software, but to cancel the exams and the academic year until the pandemic is over. After all, if you cannot give exams, students also were not able to take part in studies to the same extent as before (e.g. library and/or lab access would probably be limited).
The software is indeed older than Covid, but its common usage by widely respected schools is not.
So your preferred solution would be to just stop all education for the duration of the pandemic? Although online education sucks, I'm not sure if most students would agree with you.
In all fairness, as we agree that the education experience (and the education?) does not come up to the pre-covid standard, I can see employers discriminating against students who graduated under these circumstances. Students should have an incentive of that not happening, and thus welcome a gap year.
> How screwed is the system that anyone one would believe this level of intrusion should be part of the education process.
It's all a lot of smoke grenades to cover for:
- an education system/process that at its core is not about actual learning but "bulimia learning" aka memorizing the facts the profs deemed relevant for the exam and forgetting them the very second the exam is over to make space for new useless stuff that will be forgotten just the same way.
- an employment system that has "optimized" to needing as-standardized-as-possible papers that certify potential employees of having skill X so that hiring managers can easily separate between candidates that are "worth it" on paper without having to waste time on "unworthy" candidates - something that obviously fails as it chucks out a lot of the people that aren't built for bulimia learning but can't prove that (unlike an awful lot of the "certified" people) they actually know what they're doing
- "education" institutions that are more interested in getting grant money and income from student tuition rather than on training actually talented students
- and as a root cause of all of that: employers believing they need "university graduates" when the good old German-style apprenticeship system works just fine... with the side effect that apprenticeships cost the employer actual money for years for training the apprentice, whereas with university the students and their parents pay the bill for, sometimes, the rest of their lives
Education and employment is in dire need of reforms. Universities should be serving only those who are actually interested in science, companies should pay for educating the workforce they need instead of forcing generations of young people to take on unsustainable debts, and schools should be reformed to actually provide stuff people are going to use in their later career.
I suspect there’s a strong case that universities are violating ADA and state equal-access laws if they’re using proctoring software that unfairly penalizes individuals with glasses, to say nothing of the issues raised by the potential of racially-biased AI. A few class action suits against state university systems might well be warranted.
Not defending Proctorio, but: As I understand it, the software results aren't directly used to penalize students. It just flags the situation for review by an actual human if the software doesn't have high confidence that the student is looking at the screen most of the time.
That doesn't mean there isn't a violation here, of course.
The bigger problem I see is professors that think "Oh well it's sophisticated AI, it can't make a mistake" and take the results for granted.
In one of the Twitter threads I saw a screenshot from a professor's email in which they mentioned that "X student had 100 more eye movements than Y student" and threatened to fail the entire class.
That email blew my mind because it seemed like the professor just didn't know or didn't care that the software was the problem here. And that's the real issue.
If the email mentioned particular students by name then that in and of itself is a violation (at least in California). We're absolutely not supposed to share any student performance information with anyone other than the student, without permission.
That's federal. FERPA.
Yes, that is exactly how it works (I TA for a class which uses it). Proctorio flags things it thinks are suspicious, and when we look at the video we usually find it was something innocuous (pets are a common culprit). We do find a few people actually cheating though so it definitely has its merits.
I had to use Proctorio for a stats class that I took, and two or three times per test it asked me to lift my laptop up and rotate it so that it could see the entire room. The worst part was that when I was done the button to return to the test never showed up. Each time this happened, I had to contact customer support to get them to unlock my screen.
The distraction this posed had a measurable impact on my scores on these tests.
You should demand points on the exam for successfully completing the instructions of the exam. Anything which can count against a grade must surely count toward a grade.
> So it is crucially important that this type of software receives a lot of scrutiny to make sure it works as planned.
What's crucially important is that this type of software is BANNED. I now have a new question to ask schools when my kid applies to one.
It's not a trivial problem to solve. The amount of cheating that goes on at the university level is pretty astounding.
Maybe we need an honest to God religion forbidding the use of non-free software, and then we too can play the religion card. Class orders you to install Proctorio? "Sorry, that's against my religion." Fail the class? It's because they required me to sin against my religious beliefs and I would not.
Of course, like all religions, not all members will follow the beliefs all the time. At least that's what I tell myself while sinning with a non-free video game I installed. Many a pastor has molested children, but the courts still recognize those religions. So when the founder of this religion is caught dual-booting Windows, we'll just remember that the religion is divine, even if the people aren't.
It already exists: https://stallman.org/saint.html and if people can get Jedi recognized you should be able to get that recognized (but the courts basically look at do you really act like you believe).
Ah, Proctorio: education spyware, just like Socrates used to impose on his pupils.
Seriously, the day my university required this would be the last day they received a dime from me. And I’d be getting a refund for the current semester.
This is the kind of thing used by companies with terrible management. I hope universities that rely on these extreme tools come to realize how badly it reflects on them.
Proctorio sounds like Factorio for assholes.
The whole thing sounds like bad fiction from some dystopia... and to make things more comical, their name could describe a rectal examination or something.
I really hope institutions using this rethink and the company just dies.. some things should not exist.
My last uni used this, and I had another major complaint: I had to show my driver's license to the person in the call center in India every time! None of my questions about what kind of privacy controls existed could be answered... and being a Linux-only person, it was a major pita to either get them to understand I don't have Windows, or to get them not to give me a hard time if I was using a VM, because who tf trusts that software? (The great irony being that this school constantly touted its cybersecurity degree programs.)
>looking around in the room or blinking too much
>And I tend to close my eyes a lot because it helps me visualize the problem
Nvidia has added an eye-correction feature to its Maxine platform (an SDK with a set of ML features for video conferencing); it can correct your eyes in real time so it looks like you're staring at the camera even when you aren't. When I first saw its demo, the first thing that came to mind was these proctoring tools[1]. It's a matter of time before all major video conferencing tools add these features or 3rd party plugins/hacks which enable it.
I'm not against these proctoring tools, especially since there are not many options during lockdowns. But considering what's at stake, they deserve all the scrutiny they can get, and if a company threatens lawsuits over genuine criticism, it says a lot about their business practices. Sadly this seems to be very common in the e-education sector (check out unicorns claiming to have placed a 10-year-old at Google after taking their 'coding' class, and the retribution faced by activists for exposing the lies).
> It's a matter of time before all major video conferencing tools add these features or 3rd party plugins/hacks which enable it.
Apple demonstrated this in Facetime about a year ago, but I don't know what happened to it. I don't use Facetime, so I can't confirm it was ever implemented. But clearly it's on big tech's radar.
I see, thanks for sharing. I don't do any video calls either (or any real-time comms). I suspected that this feature might already be in some video conferencing software, considering there seems to be a need gap for eye-correction features.
From your Insider link:
> LSU student body president Stone Cox said that the fees, which could come out to $300, were prohibitive for students.
What the hell? Not only are universities mandating students infect their systems with malware, but they're making the students pay for the privilege? That's ridiculous.
If you think this is bad, I've been sent online coding tests that do the same thing after applying for positions. They require you to keep your camera on and record you as you complete the tests.
Thankfully, it's a good filter for deciding which employers I don't want to work for. I can only imagine what it's like working for a company that trusts their employees that little.
Wow, is that software legal to force upon your students? If you tried this in Europe, even with the student explicitly clicking "I consent", you would still be facing pretty serious GDPR penalties because the student had no other option than to accept making their consent invalid.
It would appear that it's already in use in Europe and has won in court.
https://gdprhub.eu/index.php?title=Rb._Amsterdam_-_C/13/6846...
> The Amsterdam Court of First Instance rejected the request by student representatives and an individual student for a preliminary injunction against the use of digital surveillance software for exams by a public university. The court ruled that measures against covid-19 did not allow for a suitable alternative and the processing could therefore be based on Article 6(1)(e) GDPR.
The Netherlands seem to really embrace these platforms. I think for example Chromebook based exams have been big there for quite some years now while it's almost unheard of in Germany.
I don't understand how that passed.
> (e) processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;
What's the point of GDPR if you can side-step it in this way?
I wouldn't be too sure about those GDPR penalties. Dutch universities are also using Proctorio, with the recorded footage being sent to the US: https://www.volkskrant.nl/columns-opinie/opinie-universiteit...
That would allow all those students to exercise their GDPR rights against Proctorio and their university. Probably just takes a couple hundred to completely overwhelm Proctorio and cost them so much money that it makes more sense to drop the small Dutch market. Or to get them in a lot of trouble if they refuse to give all students access to all data they have.
Will take the privacy regulators a few years, justice moves slow. But if I was a Proctorio shareholder I would be very very unhappy if my company got stuck in years of unresolved legal issues with unknown (and high) potential fines.
That's one of the reasons I wish more people would run Windows 10 S, or whatever the store-only version was called. Or iOS.
Can't install shady third-party drivers, can't install stuff like this that messes with hypervisors (and every app more or less runs in a sandboxed process anyway), and your app can be pushed into the background at any time!
Don't like it? Roll your own devices.
I really hope this was meant as sarcasm. There are already banking apps and some mobile games (hello Pokémon GO!) pushing for trusted computing, where the user is not in control and only a manufacturer OS image can be installed. Not to mention game consoles, which are like this by default.
We hardly need more entities pushing for this dystopia...
Here I was assuming it was a proctological mod for Factorio that modifies how inserters work. Reality is infinitely worse.
> or having eye and/or skin colors [1]
Yet you could only link to one example for a totally different program ExamSoft.
They had a thick dark beard above a dark shirt, and dark glasses with reflections (on a much lighter skin tone). They also look like they have a second eyebrow due to the lighting on their eyelid and glasses.
It could be their head shape tied to their race, I guess, but you have a sample of one, from a different program, and we know nothing about what it's doing.
> or reading questions out aloud [2]
> I am so glad that this kind of abuse was not yet common when I was in university. I love sitting in the (dark) basement, it helps me concentrate.
Compared to going to exam rooms full of people with noise and lighting outside of any control? When were students allowed to read out loud in the 'old' days?
We are in a pandemic, millions are dying; I don't get this attitude. What's the alternative? Not do exams? I know I cheated, and I know most other people at uni cheated under the old system, which was hard to cheat in. Hell yes we would all cheat more if the new system allowed us to.
The videos in question are still on YouTube under "Proctorio Reuploads", for what it's worth.
My favourite Hacker News game to play is to count all the hoops people will go through to deny the fact that maybe, just maybe, race can be a factor in people's experiences.
I'm aiming for a shadow ban so I'll be frank.....
https://pimeyes.com/ can find the shitty twitter photo of their computer rejection screen from "ExamSoft" across the global fucking web.
I put in the photo from twitter and it popped out. And that's the fucking stupid 'photo' of the fucking stupid 'rejection screen' on a program that's not even Proctorio.
Sooooo PimEyes is Polish; maybe it's the programmers from the USA, and the fact you have all lost your minds and want to be racist, because it makes up for your lack of ability to program. Because the Poles can do facial recognition on a USA phone photo of a rejection screen.
Yes Dang... here it is, you did this on a different comment but we know this is the one it's about (Still keen for shadow ban, replies are the worst)
wtf.
this is some serious 1984 type shit.
> In case you don't know who Proctorio is:
Procto-: Anus; (more frequently) rectum;
-torio: radioactive chemical element (in spanish)
Therefore, it is a radioactive anus?
As a remote accredited degree candidate (pre-pandemic so I did it before it was cool) the only alternative to Proctorio was finding an actual proctor in my area who was subject to approval by my university and who I would have to pay to sit and watch over me. Every exam was a multi-week hassle that I had to coordinate with the added stress of last minute cancellations. I much prefer Proctorio. Yeah, it’s intrusive for about 2 hours but then you can close it and go on with your life. Not really a big deal.
Except for the fucked-up cases the OP linked, and then some. Sure, as long as it always works for you it's great, and you can just ignore those to avoid uncomfortable thoughts. But once you're affected, I bet it turns into a whole different story, and suddenly you turn into one of those hysterical tinfoil hats yourself.
His GoFundMe if you want to contribute: https://www.gofundme.com/f/stand-against-proctorio
EFF: "...The Electronic Frontier Foundation (EFF) filed a lawsuit today against Proctorio Inc. on behalf of college student Erik Johnson..."
GoFundMe page: "...My name is Ian Linkletter... wired over $50,000 to Arvay Finlay, LLP, doubling my legal defense fund. John Trueman is joined by Cathie Boies Parker, Q.C., and Mark Underhill..."
Different lawsuits?
I think @dang did a faulty merge between two different but related articles. This was the original article it was from:
Yikes - I didn't realize there were two distinct lawsuits. I guess I foolishly assumed that they would only shoot themselves in one foot?
I'm not sure what to do now that these threads have gotten blended so thoroughly. Will figure something out.
If you shoot yourself in one foot you get a nasty limp. Two is much better, for balance.
No worries dang, mistakes happen :P.
As someone else said, this originally linked to a tweet by Linkletter, who is being sued by Proctorio. Similar but different scenarios.
I feel like this comment should be at the top - for anyone who feels strongly about this issue and didn’t spot this link further down in his Twitter thread.
Have an upvote.
Totally agree
Just donated. Thanks for this.
They should have just ignored this guy and gone on with life. Now they've got the EFF to deal with, and it serves them right.
There is another HN thread on this, and a lot of people in it are more upset about the line of work Proctorio is in, and how they do it, than about the fact they sued this minor irritating student. My opinion is there is simply no easy, pain-free way to do fraud auditing, and that's that. I just recently took a professional PeopleCert exam online, with a guy watching me through my webcam in a closed room I had to show in advance, and a temporarily installed invasive application. So what? If there were a better way to deal with the unfortunate fact that some people will cheat, I would be all for it, but getting all shook up about a temporary set of specific restrictions for a singular milestone event is a little unrealistic in my book.
But I am glad suing this kid is blowing up in their face.
Maybe catching a few cheaters here and there isn't worth implementing The Panopticon for everyone else.
The cynic in me believes that without any disincentive to cheating, "few cheaters" becomes "almost all participants".
I was a TA in a prominent CS university. We used software that would compare everyone’s submissions to see if things were copied. The penalty for getting caught was high, and we made it clear to everyone that this software was, while not perfect, capable of detecting simple tricks (like copying code and renaming variables).
Nonetheless, after every project, I would go and have to manually review all the cases. There would be 10-20 severe cases, about 30 moderate cases, and then about 50 cases where some things might look suspicious but there wasn’t enough proof to act.
Overall, we would penalize about 20-50 people per project (granted, this is a class size of over 1000). But still, 2-5% would cheat despite our repeated warnings that we could catch them.
I shudder to think what would happen if we had no way to detect.
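To give a sense of the core idea behind that kind of comparison, here's a toy sketch in Python (not the actual tool we used, which was far more robust): collapse identifier names and literals so that copy-paste-and-rename still yields nearly identical token streams, then score the overlap. Treat the function names and thresholds here purely as illustration.

    import re
    from keyword import iskeyword

    def normalize(source: str) -> list[str]:
        """Collapse identifiers and number literals so renaming doesn't matter."""
        tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
        out = []
        for t in tokens:
            if iskeyword(t):
                out.append(t)        # keep language keywords
            elif re.fullmatch(r"[A-Za-z_]\w*", t):
                out.append("ID")     # every identifier becomes the same token
            elif t.isdigit():
                out.append("NUM")    # every integer literal becomes the same token
            else:
                out.append(t)        # operators and punctuation stay as-is
        return out

    def similarity(a: str, b: str, n: int = 5) -> float:
        """Jaccard similarity over n-grams of normalized tokens."""
        def grams(tokens):
            return {tuple(tokens[i:i + n]) for i in range(max(0, len(tokens) - n + 1))}
        ga, gb = grams(normalize(a)), grams(normalize(b))
        return len(ga & gb) / max(1, len(ga | gb))

    # Two submissions that differ only in variable names score ~1.0.
    print(similarity("total = 0\nfor x in xs:\n    total += x",
                     "acc = 0\nfor item in items:\n    acc += item"))

Real detectors also handle reordering, dead code, and cross-language tricks, but even this level of normalization is enough to catch the "rename the variables" cases.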
While TAing there, I was also a student. Cheating was pretty much the norm. I was always a cynic after those experiences.
But isn't "cheating" the norm in real-world jobs too? It's rare that I or any of the developers I work with complete a task without looking something up or asking a question of someone more knowledgeable.
Seems to me the problem is how the assignments are posed – If the goal is to create a program that does X, and I can do that by copy/pasting or tweaking something from StackOverflow, have I not completed the goal? But if what you actually want is for me to understand all of the principles behind a program that does X, well that's a very different thing. And the assignment should be set up differently.
It feels like Proctorio and similar solutions are treating a symptom of the real problem, which is that the way a lot of higher learning is administered is inherently flawed. And until we fix that, won't cheating and cheating detection be the same Coast-Guard-vs-smugglers arms race it's always been?
> But isn't "cheating" the norm in real-world jobs too? It's rare that I or any of the developers I work with complete a task without looking something up or asking a question of someone more knowledgeable.
One part of one of my math exams at a previous university was (I feel) modeled around this idea: you were allowed to use a non-programmable (graphical) calculator and to bring in basically any written material to help you solve that part of the exam. Of course there were some restrictions: none of the solutions to the various homework assignments, etc. That part alone, making up the whole ~2 hours of the exam, would have been nice if there hadn't been the other half: no calculator, no helping materials apart from those provided on the exam sheet. That part (of course) was the one containing the questions about specific definitions, one or more things to write a proof for, and calculations like double and/or triple integrals or deriving complicated expressions.
In other courses (Databases 1 and 2, Web Dev) at my second (current) uni, some professors had a clause that you could bring help in the form of "a DIN A4 sheet of paper, handwritten on one side, not copied". The professor, or the TA if one was present, would pass through the rows during the exam to check student IDs, have each student sign an attendance sheet, and check whether the help sheet was compliant (and didn't happen to contain answers to the exam itself); they would then sign that help sheet and collect it together with the exam on completion or timeout.
Our course was only ~80-90 people at the start at my first uni and ~30 at the start at my current one, so we had to be thrown together with some other degree programs ("Studiengänge" in Germany) for exams in the same courses, to have an exam that made sense for everyone.
Please don't muddy the waters like that. I'm sure you understand perfectly well that when a job asks you to do X, they just want to get X done, whereas when a school asks you to do X, they want to test your skills or make you practice.
Isn't the purpose of school to train you to where you can do X for a company? If standard practice in the field is to look that up, I fail to see the value in forcing you to remember how to do X so that you can later forget it and end up Googling it anyways.
I generally find it silly to teach things that you can adequately grasp from Google in 10 minutes. That's not to imply that anything can be learned from Google in 10 minutes, but I find it dumb to spend time on that in college. Learning how to implement an optimized bubble sort? That's kind of dumb; in the real world you just use an existing one. It's almost certainly faster than what you would have come up with. Learning recursion? Yeah, that probably deserves to be taught, because some people really struggle with it.
> If standard practice in the field is to look that up, I fail to see the value in forcing you to remember how to do X so that you can later forget it and end up Googling it anyways.
I agree 100%. I also don't see the value in forcing people to memorize googleable facts. It's frustrating to see schools implement curricula like that. However, I disagree with your implication here. You're implying that because schools make you learn stupid things, everybody should just cheat. What's the point of that, exactly? You think the curriculum is not well optimized, so you would rather reduce the entire institution to a diploma mill, instead of simply learning within the unoptimized programme?
> Isn't the purpose of school to train you to where you can do X for a company?
That may be true for a vocational school, but for a computer science programme at a university, it is not true at all. If you look up the stated purpose of computer science programmes, they are about educating people for the sake of knowledge itself, not to achieve an ulterior purpose like producing a useful workforce.
> Learning how to implement on optimized bubble sort? That's kind of dumb; in the real world you just use an existing one.
If you memorize a bubble sort algorithm without understanding, just to be able to repeat it back verbatim in the exam, I agree that's dumb (and it's also dumb on the school's part to create a programme that incentivizes such behavior). However, that's not the only way to learn bubble sort. You can also approach it from an "algorithm design" perspective, and strive to develop the skills needed to design and work with algorithms. That's not dumb at all. In fact, I would argue that's one of the core skills that a computer science education should provide.
> in the real world you just use an existing one.
No, you wouldn't. You would never use a bubble sort in the real world, because it has O(n^2) time complexity and it has no redeeming qualities when compared to other algorithms.
So let me get this straight: if someone at work asks you to "implement sorting for these results", you might just google "bubble sort java", copypaste code, test that it works, and ship it? And you don't think there's any value in learning what properties these algorithms have or how they work? Don't care, don't wanna learn, cheat through school, end up using bubble sort in production. Great. That's exactly the type of attitude that causes 5 minute startup load times in GTA.
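To make the point concrete, here's a tiny sketch contrasting the two things being argued about: understanding how the textbook O(n^2) algorithm works versus what you'd actually ship (the standard library sort, which is O(n log n)). Python is used only for illustration; the numbers are made up.

    def bubble_sort(items):
        """Textbook bubble sort: repeated adjacent swaps, O(n^2) comparisons."""
        a = list(items)
        for end in range(len(a) - 1, 0, -1):
            swapped = False
            for i in range(end):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            if not swapped:   # already sorted: stop early
                break
        return a

    data = [5, 3, 8, 1, 4]
    assert bubble_sort(data) == sorted(data)  # sorted() is what production code uses

Understanding why the first one is quadratic (and when the early exit helps) is the "algorithm design" skill; calling sorted() is the production decision.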
> But still, 2-5% would still cheat despite our repeated warnings that we could catch them.
There is research that the certainty of getting caught is the strongest deterrent (as opposed to the severity of the consequences if someone gets caught) and yet, assuming the software was as good as you suggest and assuming the students were adequately and repeatedly warned, 2-5% of the class still chose to cheat. Which begs the question: why would so many (presumably logical and rational) students take such a risk despite knowing they were substantially likely to be caught?
I have to imagine those students, for whatever reason, felt cheating was their best (or perhaps only) option.
The cynic in me believes we’ve created a system that strongly incentivizes getting an expensive piece of paper with a value completely divorced from what it was intended to represent: having studied.
The problem is that cheating (if allowed) does not end up being a few people here and there.
Even if I were to believe you, so what? So a person gets a degree they don't deserve. Now they have all this student loan debt they can't pay off because they can't get and keep a job in their field because they didn't actually study.
Or they do get a job and are able to hold it down, in which case, so what if they cheated? They're clearly capable. Let them keep doing the job.
My solution: get rid of degrees completely. Can't cheat if there's no test to cheat on.
I don't disagree with you, but our current system that we have in reality does not handle things well if you just allow rampant cheating.
Restructuring is a good idea but you don't want to throw the baby out with the bathwater in the meantime.
I actually agree with you here, our incentive structures are flawed for sure.
The problem is, hiring is expensive, hiring mistakes are expensive, and new grads are a combination of “lack of signals” and “need lots of time to pan out”. The result is companies look for any possible signal - this more or less becomes a degree, and within the degree, the GPA. Thus incentivizing the student to get the degree with a high GPA by any means necessary, and companies will still give you a year to ramp up.
I'm probably just agreeing with your agreement, and being pedantic, but:
Isn't effectively the entire problem with university/college in the current age that, instead of serving its original purpose, it's treated as a hiring filter for companies? Shouldn't we do anything we can to disincentivize the use of college degrees as a signal for hiring?
For actually doing the job of software development, a degree is completely irrelevant. Computer science degrees shouldn't be 4(?)-year-long coding bootcamps; they should be about computer science, for people who are interested in computer science. (Disclaimer: I never did a computer science degree, I did chemistry, but I felt the quality of the degree was similarly afflicted.) Anything we can do to make college a worse hiring signal for companies is better for the quality of education, better for people's financial health, and better for equality.
The entire system is rotten and we need to bring the edifice crashing down, not make life easier for companies who are about the only entities with money. You know what costs a lot more than hiring relative to the entity's financial means? A student loan [citation needed I guess].
The HN post you mentioned now seems to be merged with this one. Very confusingly. This puts yours and other comments way out of context.
Why do the mods/dang do this? Is it just to avoid multiple stories on the same topic on the home page? There’s got to be a better solution than this. At least providing some sort of log of changes would be helpful.
I've been encouraging HN to merge comments on duplicate posts for a long time. If that's what happened here, I applaud it.
dang has given an explanation on the merge https://news.ycombinator.com/item?id=26907466
Those are different lawsuits. The other (previous) article was for Ian Linkletter.
> This is a civil action seeking a declaratory judgment of noninfringement under the Copyright Act, 17 U.S.C. §§ 106, 107, as well as injunctive relief and damages for misrepresentation of copyright claims under the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. § 512(f)
Oh cool, so the EFF is gonna claim a false DMCA filing. I hope the student will be made whole financially after what he had to go through; apparently he's been fighting this for a year.
They need to go after the lawyers who signed off on the notice. That is the only way to get real accountability.
I agree, especially considering that filing a false DMCA takedown means perjuring yourself:
> A statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.
In these situations, lawyers are renting out their credibility to bully people who can’t afford to go to court even if they’re in the right.
I think that being charged with perjury is too high a penalty for lawyers as a group to accept as a possible consequence of simply filing a DMCA notice on behalf of a corporation. So I doubt it'll happen. If there were a special penalty, more of a slap on the wrist, but enough to inconvenience lawyers who sign off on these bogus claims, then I think it would, counterintuitively, be more effective.
I agree, I am not saying the lawyers should necessarily be charged with perjury, just that they should know the seriousness of what they're doing when they do this.
Strikes the portion of the law that says anything about responding to all claims "expeditiously", and replaces it with something that accurately represents the severity of the situation(s).
Edit: severity in relation to an actual copyright violation, not the possible damages imagined in the accuser's theoretical scenario if the content were not removed. Some companies might gladly eat a fine if only 1 in 100 people challenge their sick interpretation of the law and it keeps the bad PR from getting out. I could bore you with far worse scenarios, but I will not aid the authoritarians with any further information.
The wording of the DMCA is irrelevant. All court officers take an oath to behave ethically. Submitting fraudulent documents because you're too lazy to exercise due diligence verifying the claim should result in meaningful punishment.
Without reviewing the pasted code it's hard to know if the claim is fraudulent, but I can assure you the DMCA is not the right tool for the job as-is. Using the word "expeditiously" implies there is some immediate threat that must be dealt with to prevent grave damage to the copyright holder. In reality, maybe some sections of code were pasted for critique, clearly not to steal property from the copyright holder.
If only the most severe cases needed to be handled "expeditiously", then MAYBE people wouldn't look at the DMCA/DRM as a censorship tool. Right now, though, it's an automated nightmare with no means of distinguishing fair use of a small, incomplete/unusable section of code from someone posting a full compiled binary.
Great for them if they can prove the claim was fraudulent; it can only aid in the fight to fix this busted law, which has been abused repeatedly to harm competition and criticism. We have to fix it now, before the cops start using it to shut down streamers by playing copyrighted music at them (oh wait, this IS happening; we need to fix this stupid law already...).
This is something that student unions around the country should pick up and launch protests against. Refusing to take tests administered by Proctorio and holding very loud protests across the country would be a perfect way to drive them out of business.
I have a question for anyone who has seen similar situations. (I'm 35+ years old and have never worked at a corporation except for some small gigs when I was 18, so I'm really ignorant of how these companies and their internal decision-making work.)
Why in the world would a company sue a critic, under weak legal arguments, thus buying themselves this kind of terrible negative publicity? Who will trust/like/or want to be associated with them now?
I guess that this is to suppress criticism, but it must be expensive and difficult to do the same in every jurisdiction in which critics pop up, so it looks like a dumb strategy (let alone immoral).
I mean, the guy seems to be an educated, good citizen; he works at a uni, and you can publicly read his views on Twitter. The kind of person I tend to sympathize with. Without knowing who Proctorio is, I imagine them being the kind of company I don't want to be associated with, just because of this.
I don't understand.
"Backcountry" is a generic term very commonly used in the outdoors community, and it features in numerous company names and products. The legal team of the online store Backcountry decided they would sue any small company that had the generic term in its name.
It's almost like suing any cafe with 'cafe' in its name after you start a business called 'cafe'. Except this is the type of corporate stoogery that much of the outdoors community particularly loathes.
So yes, the large outdoors company sicced corporate lawyers on small businesses who can't afford legal fights, to bully them into removing the very common outdoors term from their names. This caused a huge backlash and boycott, and it's why I still refuse to shop there and instead shop at REI and others.
That incident shows how corporate decision makers can be completely out of touch with their customers, their market, and reality.
It's terrorism. You know that if they sued him, they would definitely sue you. He'll probably get his costs covered by donations in the end (after a lot of stress), but you won't.
In that way they also create a floor of wealth and/or visibility for people to realistically qualify to criticize them. Those people are more likely to be investors than critics.
It also proves to investors that they can defend themselves and are not risky.
This actually seems pretty effective to me. I now know that if I criticize Proctorio, I am likely on the hook for six figures in legal fees. That alone would be enough to scare off many people who would otherwise feel inclined to critique the product/company.
In fact, winning the case may not even be that important to the firm.
But for example in my home country, there's no way I could be sued for this. I mean I could, but if I was, the legal fees would likely need to be assumed by the entity making this kind of ridiculous lawsuit.
There are too many like me, in different jurisdictions, who cannot be silenced like this.
And I'm still in awe that a modern democracy like Canada would allow its citizens to be threatened using its legal system. It seems corrupt.
Sure, you could scare some others into not criticizing you. But this case is going to be mainstream news now, while before their case only a small number of people would have read the tweets.
And I think losing this case could get very expensive if the defendant manages to convince the judge it's a SLAPP case?
Anti-SLAPP statutes aren't available everywhere in the US (nor, I think, in Canada). Importantly, there is none federally, although sometimes state statutes can be used in federal court.
>I now know that if I criticize Proctorio, I am likely on the hook for six figures in legal fees.
This fact alone should shine a very bright spotlight on Proctorio and by proxy, the educational institutions that force students to install it on their computers.
A lot of people will likely never have heard of this, and it's not like you or I as a student will have any choice in the matter anyway; it's not like we can choose to not use this software. And universities generally don't care.
They have nothing to lose, except a comparatively small amount of money.
Given that the CEO has previously gone on Reddit to bollock a student complaining and post part of the private support chat log, this may not necessarily be a rational decision. Sometimes just a single vindictive and nasty C-level person can set these things in motion.
After my years in the industry, I have to say it at least feels like there's an overrepresentation of this in the CEO population. But I'm not sure whether that's statistically the case, or whether it's more that, as you say, a single vindictive and nasty C-level can set these things in motion without necessarily any checks and balances to stop it -- and moreover, it's really, really visible and memorable when it happens.
To be honest, in my experience a lot of people tend to be kind of vindictive. See some of the responses to the Linux bogus-patch research, for example, with people calling for things like "destroy their careers". That seems a bit overly vindictive to me; yes, they screwed up, and yes, there should probably be some consequences. But destroying the entire career of a 20-something? Sheesh...
I've gotten death threats over email because ... reasons? These are of course entirely hollow threats, and it's easy to just shrug them off as "asshole internet crazies", but there's no reason you can't be crazy or an asshole and have a successful career.
Add to this that a lot of these people put a lot of hard work in these kind of comparatively small businesses and that criticising the company feels like an attack on you ... and you end up with this.
The difference, as you say, is that most of us aren't actually in a position to enact these kind of things.
Maybe there's also some bias towards certain personality types, I don't know.
SLAPP suits (acronym for Strategic Lawsuits Against Public Participation) are designed to shut up critics, plain and simple.
They're not designed to be won, they're designed to be an annoyance to those that are being sued, dragged on as much as possible, and incur as many legal fees for the defendant as possible.
The end goal is simply for other people's self-censorship to kick in. As in, when other people want to criticise Proctorio, there's a chance they're gonna stumble upon this lawsuit and decide against speaking up.
Highly recommend this John Oliver video. In my opinion, it's the best one they've ever done with an absolutely magnificent ending: https://www.youtube.com/watch?v=UN8bJb8biZU
So lots of people commenting that they don't have an issue with such software. Fine, but that's an entirely different debate.
This is about the company suing someone for criticising the software by detailing how it is supposed to work. So even if we accept that we need such software, do we really want to go down the path where software companies can sue people for talking about how it works and criticising them?
Note also that the original lawsuit was for Ian tweeting about their public YouTube videos.
How can this garbage lawsuit be allowed to go on and cost a man $100,000 in lawyer fees??
Because the rules are made by lawyers
Maybe... But in the US we have anti-SLAPP laws written by lawyers in a lot of states. Any California court would toss this shit out in a second.
By lobbyists*
And because it's in the US.
Small correction: he’s being sued (by a US company) in Canada, under a new Canadian law.
Aha, that might change it indeed. Do they have the "loser pays the costs" policy or not?
Kind of.
According to the EFF [1], he might be able to recover his legal costs: “ But Proctorio’s bad behavior has inspired a broad community of people to fight for better student privacy rights, and hundreds of people donated to Linkletter’s defense fund, which raised more than $50,000. And the PPPA gives him a greater chance of getting his fees back.”
1: https://www.eff.org/deeplinks/2021/02/student-surveillance-v...
SLAPP suits are absolutely a thing in the EU as well.
Agreed, the problem here is the lawsuit. They should have just ignored this guy and gone on with life. Now they've got the EFF to deal with, and it serves them right.
On the software side, there is no easy way to do fraud auditing, and that's that. I just recently took a professional PeopleCert exam online, with a guy watching me through my webcam in a closed room I had to show in advance, and a temporarily installed invasive application. So what? If there were a better way to deal with the unfortunate fact that some people will cheat, I would be all for it, but getting all shook up about a temporary set of specific restrictions for a singular milestone event is a little unrealistic in my book.
I haven’t seen anyone commenting in defense of Proctorio. To be honest, Proctorio seems indefensible. I hope this student makes bank.
To your second point, the US legal system is extremely litigious, so I don’t know how you turn that ship around. I agree that this should be thrown out as a frivolous suit and the student should be compensated for damages— legal fees, time, psychological stress, etc.
Might have been different hours ago, but now I see plenty of comments saying "but it's necessary".
I think the bigger issue should be how the University came to choose something like Proctorio. This has been the case with many colleges around the world; disconnected or ignorant authorities keep signing up to whatever the SaaS salesman pitches to them, without vetting the company's background and records, and not listening to the affected parties in their arrogance.
My own college struggled with online exams, and turned a deaf ear to students and even professors objecting to the platform being used (it's the most popular proctored exam platform). In the end, due to their own incompetence at handling exams, there turned out to be large-scale cheating, and the college then forced the entire batch to take the exams again in the next term.
The problem was that COVID lockdowns took many schools by surprise, and instead of designing exams for online platforms, they tried to fit regular exams into an online platform.
Turns out, it's incredibly easy to cheat on those exams.
There's probably a market opportunity to replace testing with something better: some actual measurement of competence that's a win/win/win for students, instructors, and universities. Maybe a company could come up with an interesting idea and get YC funding to scale a solution to that hard problem!
Related past threads:
Students of color are getting flagged because testing software can’t see them - https://news.ycombinator.com/item?id=26745582 - April 2021 (21 comments)
Student Surveillance Vendor Proctorio Files SLAPP Lawsuit to Silence a Critic - https://news.ycombinator.com/item?id=26250800 - Feb 2021 (40 comments)
Parents demand academic publisher drop Proctorio surveillance tech - https://news.ycombinator.com/item?id=25506007 - Dec 2020 (106 comments)
Proctorio used DMCA to take down a student’s critical tweets - https://news.ycombinator.com/item?id=25002730 - Nov 2020 (116 comments)
An ed-tech specialist spoke out about proctoring software. Now he’s being sued - https://news.ycombinator.com/item?id=24872084 - Oct 2020 (6 comments)
EduTech Spyware Is Still Spyware: Proctorio Edition - https://news.ycombinator.com/item?id=24450248 - Sept 2020 (9 comments)
I've got to wonder how much child porn is on Proctorio's servers and how they deal with it. My guess is 'negligently'.
Do they have systems to catch or prevent pervy teachers or Proctorio employees who repeatedly watch recordings of students wearing revealing clothing, having wardrobe malfunctions, engaging in sexual activity without realizing they're being recorded, etc.?
How many pedophiles has Proctorio caught? If it's zero, I don't think they are looking at all.
Recently, I had an exam through ProctorU and thought I'd try to reclaim some privacy using Windows Sandbox, which is when I learned that they use TeamViewer to take full control of your computer. The sandbox was not allowed, simply because parts of the Control Panel were disabled and they couldn't verify that I had only one monitor. I used a mirror to show them my laptop and desk, but that was not sufficient. I spent a lot of that weekend just trying to take the open-book exam.
I really hope that universities will consider their students before adopting this type of software.
I’ve sent an email in regarding a CCPA request and got this response.
“ Hello,
Thanks for reaching out! I'm following up on your request. I want to let you know that no one at Proctorio has access to your information. Only authorized personnel at your school (Instructors or Administrators) can access any of the information collected while taking an exam.
I'd be happy to discuss this further with your instructor if you would like to connect me to them.
Best,
Josh”
I personally believe education should just move forward into a world where cheating is impossible. Hands-on schools like Olin College of Engineering make it practically impossible to cheat... because you actually have to build something! Sure, plagiarism is still an issue, but that's much easier to control for than monitoring students while they take a test.
I don't disagree, but the issue here is one of scale and costs. Testing is just easy, cheap, and very ingrained to boot. A single teacher can teach dozens or hundreds of students.
Olin charges $55,612/year tuition fees alone; the total costs are estimated at $79,024/year[1]. This is a lot higher than a lot of other "normal" universities[2], and even seems higher than most "top universities"[3].
I'm sure it's great if you can afford it, but most can't.
[1]: https://www.olin.edu/admission/costs/cost-of-attendance/
[2]: https://uscollegeinternational.com/2019/10/03/cheap-engineer...
[3]: https://studyabroad.careers360.com/articles/engineering-in-u...
Olin provides a 50% scholarship for everyone and does really well at financial aid. I currently go here :). In the beginning of the school's lifetime, everyone had a 100% scholarship. It's a common tactic for schools to say their college is the same price as MIT/Harvard but not actually cost that much.
Plus, many schools are moving towards a co-op/hands-on/project-based model, even public ones. The University of Waterloo is one example, and many state schools I know of are also transitioning to a project-based model. It's not that much more expensive than testing - especially given that everyone has a computer, 3D printers and workshops are relatively cheap, and most undergrad projects aren't that expensive.
Students tend to get internships, pay for their school, and move into high-paying jobs immediately out of school because of how much experience they have, even if they have to give up some theoretical basis. That being said, Olin produces many grad-school students who do well at research.
Proctorio did a sort of AMA two years ago on Reddit: https://www.reddit.com/r/AMA/comments/augdmo/ask_proctorio_a...
As a technologist I've worked in "online bullet loans", payments, and other controversial verticals, but for the life of me, I would never work in such a dirty business (as Proctorio). I'd rather tell people I work in porn-tech (where people are literally f*d) than these asinine proctoring systems.
Just... no.
It’s as if nobody has ever read George Orwell’s 1984: “It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself – anything that carried with it the suggestion of abnormality, of having something to hide. In any case, to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offense.”
They just now found the tools to enforce it.
> It’s as if nobody has ever read George Orwell’s 1984
no, they read it and think "how can we do even better"
Please don't post this sort of cheap flamebait crack here. It makes discussions poorer.
Orwell is the modern Machiavelli. In about 100 years, people will be wondering if 1984 was satire/criticism or an actual blueprint for effective government in the digital age, in the same way the real aim of Il Principe has been endlessly debated for centuries.
I came up with one of EFF Finland's t-shirt slogans: Orwell was an optimist.
Of course, I was soon outdone. It didn't take long for the adapted version to surface: Orwell was an amateur. Which, I have to admit, is an apt description of our times.
In contrast, Orwell's fundamental rejection of the totalitarian surveillance society is well known and documented. Even in 100 years, those who are interested can read it all.
The irony is, 1984 is used as a manual of sorts by some.
Unless, of course, such documentation gets suppressed. Which is pretty Orwellian; in fact, such operations are the protagonist's job.
Not if it gets Fahrenheit 451'd :-)
Machiavelli gets a bad rap. I read Il Principe and the primary message is moderation. I dunno how people contorted that into evil scheming.
I have been overseeing some exams being run at a well-regarded university. No remote proctoring software (i.e. spyware) is in use. Students are taking their exams in the web browser of their choosing, on the platform of their choosing. They aren't sat dialled into a video call or anything else (but they have a link for a backup one in case they have issues or need clarifications on a typo, etc.).
It's straightforward - a well-designed examination should allow for adequate distinction between students, allowing everyone (who studied the course and learned) to show basic knowledge, and those who have more advanced understanding to demonstrate this.
In designing assessments, questions were peer-reviewed to ensure they are not "easily googleable". They were designed to focus not on asking "what is X?" but on "tell us a way that Y could be achieved" or "give an example of how you would do Z". These questions are pretty hard to google, and time pressure makes it harder still.
Focusing on understanding and applying knowledge really seems to be the right way to design an exam. I can say, from what I've seen and heard from colleagues so far, that this approach gives equal (if not more) differentiation among students than usual - the good students still perform well, and the poor students still perform poorly. And personally (from experience), I find it more enjoyable to sit an exam that asks you to answer meaningful questions than one that simply expects you to memorise and recite facts back. We're not bothered if you memorise the name - just describe how you'd solve the problem.
From the number of people doing poorly even on fairly straightforward questions, I'm not hugely concerned that giving the exam online made any significant difference. A non-trivial number of students didn't even complete a mandatory question (which was clearly marked), so I assume they found it suitably challenging, even with access to the world's knowledge at their fingertips. I'd say that's a good exam.
My university forcibly pushed students into online proctored examinations with similar privacy-invasive software too, despite repeated concerns raised by both students and instructors.
As expected, it turned out to be a colossal failure - students found the remote "invigilators" didn't pay attention for jack shit and started cheating in exams, leading to the college forcibly bringing the entire batch back to campus to take all the tests again, along with the ones in the next term.
I'm sympathetic to the concerned individual threatened and harmed by corporate lawsuits, but I don't think I share the underlying concern.
> In Linkletter’s view, customers and users were not getting the whole story. The software performed all kinds of invasive tracking, like watching for “abnormal” eye movements, head movements, and other behaviors branded suspicious by the company. The invasive tracking and filming were of great concern to Linkletter, who was worried about students being penalized academically on the basis of Proctorio’s analysis.
In an in-person invigilated test environment, the invigilator/proctor is watching students carefully for suspicious movements and behaviours. We don't call that an invasion of privacy. While I don't like commercial proctoring software (I have to use some, not Proctorio, for students who can't enter the country because of Covid-19), being videoed by a machine while you do a test is a) not much different from everyone else in the class who's being watched in person (you probably get watched less, actually), b) a way to make sure essential academic standards are upheld. No one is going to get penalised based on some fallible "AI" system; "suspicious" events get reviewed by a person and usually aren't suspicious at all.
"She then reviewed the flagged sections of the video with her university's dean, who she says agreed that her conduct throughout the test had been honest. This, however, did not resolve the situation. The dean, she explained, said she might still need to re-take the exam."
https://www.insider.com/viral-tiktok-student-fails-exam-afte...
That's the dean's mistake. This kind of software categorically does not detect cheating, and cannot be relied solely upon; only a staff member can make that decision. For example, I had one student using proctoring software whose father came into the room during the test, and I as the supervising staff member recognised the event for what it was and ignored it.
I don't much like such software, but I accept the current need for it. The problems people are describing seem to be faulty staff behaviour, not software.
The software will automatically block your screen, meaning you cannot finish the exam if it accidentally activates.
"I was on the second question of an exam composed of 45 questions and I got a black screen. I'm still waiting to see what's going to happen as this is my final......... I tried to access the exam but was locked out and when i went to the exam the chat option wasn't available either. I emailed my professor and unfortunately, he wasn't watching the exam at the time and couldn't do much to help me out."
https://www.reddit.com/r/UBC/comments/g2ub05/god_kicked_out_...
The software sounds faulty. The black screen is terrible UX; it should just silently set a bookmark and let the student continue.
The overall outcome (and the student's experience and stress) is ultimately a staff issue.
The professor should have been on call, and should have been able to get the student back into the questions or, failing that, to offer another sitting. I have done both of these, among other remedies, and setting student expectations ahead of time is crucial for avoiding massively stressful problems (stuff like "If something goes wrong, stop and email me. I can add your lost time back and get you into the test again, no matter how long it takes to sort out. You won't lose marks from this." - and then actually following through). Software inevitably fails sometimes, and it is just a tool. Staff need to make their own decisions.
You are correct this is a staff failure, the failure lies in using the software in the first place.
If we agree that the current situation is so exceptional that we need to fully monitor all students taking exams (and that's still a big if - I am also a university teacher, we don't use any monitoring system, and there are other solutions as well, such as oral exams...), one could simply hire a bunch of proctors to watch the video feeds. Considering the savings that universities have made, they could even send one to everyone's home.
Copy-pasting from my other comment:
> I see it as a currently-necessary annoyance, as the least bad option. The alternatives have greater deficiencies: human invigilators using Zoom etc. don't scale (institution experience); oral exams don't scale (my experience moderating such assessments); no invigilation leads to cheating (by few students but enough to be a real problem, especially for professional qualifications - my direct experience and institution experience); shutting down education until the pandemic's over is unfeasible.
Last year, we tried not monitoring, we tried Zoom invigilation, we tried orals. Lockdown rules prevented us from sending in-person invigilators in one semester (and we couldn't have hired even 10% of what that would take) and the other we invigilated in person like normal. We are spending more than ever before, so there are no savings. All of these methods failed. This year, grudgingly, we have moved to proctoring software.
I find your statements in different parts of the thread quite contradictory. On one hand you say that things don't scale because of the number of students, but a bit down-thread you mention it's not worth making a new course for 10 students (out of 640) - and for 10 students, in-person proctoring is clearly not a problem. You also mention the issue of cheating for professional qualifications, but again, that's a tiny fraction of all exams.
Regarding the scaling of oral exams, there's actually some interesting research/calculations (I'll try to find the reference later); the cross-over where oral exams become slower is somewhere around 150 students (though I do think this is quite teacher- and subject-dependent).
Also, about budgets: I believe your department is spending more than ever; the issue is that the property services departments of universities should be saving large amounts. Considering lockdowns and staff and students working from home, maintenance costs should be way down. I suspect, though, that the money lands in completely different buckets (don't get me started on the business of university property services - the rent we pay through overheads on our grants is higher than renting office space on the main shopping street in our city).
Well, I'm talking about decisions about different things made by different people: 1) running exams (institution level - run centrally), 2) creating bespoke purpose-designed online courses (mostly department level). The decision about exam invigilation was made at the institutional level, where 10 is a small number of online students for a course and they have dozens of courses to worry about. For most of the institution's courses, other solutions don't scale, and for the institution as a whole they definitely don't. The course with 10 online students may only run online this year, and it takes hundreds of additional hours to make a purpose-designed online course (we're doing that for other courses where the material will be reused).
The oral exams I moderated didn't seem like they could scale past 20 students. The reason our online students are online is almost always because they are overseas, and the majority of them have limited English. In that context, orals are discriminatory, stressful, and very slow.
I'm in NZ. Our universities have mostly been operating in person as normal (i.e. costing just as much) with far fewer international students (who bring in money), except in addition we have comparatively small numbers of resource-intensive online students as well, and are attempting to bring parts of entire degree programs online for the first time.
Hah! I did my PhD in NZ and became part Kiwi (I still miss it, to be honest). I'm back in Europe now, in Sweden, and as you might know we have had a very relaxed (and, from a non-Swede perspective, very weird) approach to the pandemic. So we did have in-person and online exams, but all with proctoring via Zoom.
The bar to use such invasive, buggy, user hostile spyware should be a lot higher.
Just because this solves your problem (it clearly doesn't) doesn't mean it's a good idea. Killing half the students also solves your problem.
You need to live in a world where this is not an option, period. What's the next solution then? People forget there are always more manual solutions even if they take more time...
> human invigilators using Zoom etc. don't scale (institution experience)
Could you explain further why this did not scale?
I'm imagining an invigilator watching video feeds using an interface similar to, say, that used by security guards to monitor surveillance feeds. I would think that a single human invigilator can monitor more students using this system than an in-person setting. What am I missing?
In person, a single proctor can monitor 200+ students; our eyes capture far more detail and field of view than computer screens, and our peripheral vision is tuned for detecting unexpected motion. With Zoom, either you have one meeting with everyone in it, in which case students can look directly at each other, or you have a separate meeting for every student, in which case you need a large number of devices, all visible at the same time. I can't see the former scaling past 49, and I can't see the latter scaling past 20. If you had some software designed specifically for this and several big screens, it would still be pretty hard to pay anywhere near as close attention as you can in person.
On top of that, there's always a bunch of annoying mucking about getting set up for Zoom invigilation: angle of camera, light, checking environment, etc. All that needs communication to and fro, and it can take up to 5 minutes for a single student. Now multiply that by say 100 (a bit of parallelism is possible, but individual communication is needed with each student).
So get more proctors.
Given the cost of college courses, schools could easily afford $5 per student per course, and $100 for proctoring 20 students for a single exam is more than fair.
As both a TA and a student during the virus, we did better and so can you. Stop trying to shoehorn your in-person lesson plans and exams into a virtual setting, and design new material and tests. Don't be a shitty teacher just because you're too lazy to change your now-dated ways.
Please make your substantive points without swipes. Crossing into name-calling and personal attack is not cool.
Please don't make assumptions or resort to baseless insults.
We have designed entirely new fully-online interactive courses in response to this.
In other courses, where we have e.g. 640 local students and 10 online, they need to sit the same test for it to be fair.
Sounds like you need to design an online version of that course if you're going to allow online students to take it.
While I disagree with the OP on the use of proctoring software saying something like "just design an online version of the course" displays a massive ignorance of academic workloads. Designing a new course, especially one which works online, is a huge amount of work, who is going to do all that? Most academics even in normal semesters work >50h (more likely >60h) per week.
Just adjusting your lecturing to online can easily take an additional 2h preparation per lecture, and often takes significantly more, because you have to also adjust all communications and how to make material available. That's even without redesigning the course for online learning. I know a lot of colleagues who got very overworked, because of the increased demand on teachers during the pandemic.
The expense isn't justified for 10 students, since this may well be the only year that course is offered online. They are not getting a bad deal out of it. The fully new online courses are first year, and the expectation is that students will come to study in person for the remaining years of their degree. The other course I'm talking about is second year, but those 10 students can't get into the country at present - a temporary obstacle.
> The expense isn't justified for 10 students, since this may well be the only year that course is offered online.
So instead you'd rather waste money on a barely-functional piece of software that actively invades your students' privacy?
It seems to function just fine (I'm not talking about Proctorio), and it doesn't invade privacy any more than the Zoom-based tutorials, which is barely at all. It is also better than any other alternative I know of, many of which have been tried already and found wanting. Privacy is not a god to whom all things must be sacrificed. All education involves losing a little privacy (in person people will know what you look like, where you're from, and many other things that are hidden for remote students; really they have more privacy than any others), and the fact that software is involved doesn't suddenly make things worse.
>the fact that software is involved doesn't suddenly make things worse.
When that software is required to be installed on private machines and is difficult to remove, yes, it absolutely does make things worse.
The software we use is easy to remove, and students have a lot of software installed on their private machines already, some of it more invasive in software terms (e.g. games with anti-cheating features). I have more qualms about the software and the privacy implications than the students do, and I'm resigned to it for now; the students just went ahead and installed it without comment.
There are two problems with this statement. First is the assumption that students don't care about privacy; second is the lack of discussion about consent.
I'm a student who takes special care about the software I install on my laptop. I use a Linux distro, run primarily open-source software, and sandbox every single proprietary app (limited access to files, no admin at all, no screen recording, disabled webcam, ...). I've also looked into several of these exam spyware tools (you really are forcing students to install spyware), and they're built with often hilariously poor security practices.
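For context, the kind of sandboxing I mean looks roughly like this: a minimal sketch assuming a Linux machine with firejail installed, where the wrapper name and app path are just examples and the flags shown only cover the file/privilege/network restrictions; the exact options for blocking the webcam or screen capture vary by sandboxing tool and version, so check the relevant man pages.

    import subprocess

    def run_sandboxed(app_path: str, allow_network: bool = False) -> int:
        """Launch a proprietary app inside a firejail sandbox (sketch only)."""
        cmd = [
            "firejail",
            "--private",    # fresh throwaway home directory; real files stay hidden
            "--noroot",     # no root user available inside the sandbox
        ]
        if not allow_network:
            cmd.append("--net=none")  # cut off all network access
        cmd.append(app_path)
        return subprocess.run(cmd).returncode

    # Hypothetical example: run some proprietary binary fully offline.
    # run_sandboxed("/opt/example-proprietary-app/app")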
Which is to say nothing of the regularly stolen source code; if you held the exam spyware solutions to the same standards that you hold students to, you would write up almost every single vendor to the Academic Integrity office. Another example of hypocrisy in academia, from the perspective of a disgruntled student.
I deliberately do not install any video games with invasive anti-cheating functionality (and I regularly critique them, like I do for exam spyware); that is a false equivalence anyway, since they don't deal in the same breadth of personally identifiable information (like a permanently saved panorama of my bedroom).
Don't assume all students are the same.
Second, the consent dynamics are wildly different. For a game, it's like "you trade this in for fun/relaxation" - and there are always other games that don't spy on you. I play those. With universities, many pulled a fast one and introduced the spyware to students after their tuition was already paid, and said "use it or drop the course". You can't switch universities because one university didn't consider the ethics of spyware; you can switch games much more easily.
Right, but that software was their choice to install, not forced upon them by their academic institution. It's good that it's easy to remove, but other people's computers are not your property.
Why should this be the only year; why not join the 2020s and do the later classes online in the future too? If the students did their first year remotely they've already learnt to learn that way, so it should work even better with second- and third-year students AFAICS.
I fully agree with you here that bugs in the software can be mitigated by having knowledgeable staff. But people can only be prepared for it if they hear about the issues beforehand.
That's why I find it so offensive that Proctorio is suing what appears to be university staff to silence them.
I am a lecturer at a university and would never force my students to install something as privacy-invading as Proctorio on their own computers.
If you absolutely need this type of monitoring, it should be done at some sort of testing center on cleanly imaged computers. I could see it being done for something absolutely crucial such as a bar exam.
Proctorio and its ilk should not be used for simple mid-terms or even finals. This monitoring software becomes a huge crutch for lazy administrators and teachers. I should know; I am lazy too.
PS: While we are on the slippery slope, if Proctorio becomes standard we rapidly head toward a Snow Crash situation where federal jobs were heavily monitored. Many freelancer sites already use screen-recording software.
I hope that some security researchers investigate this software more closely, and at the same time I suspect that this lawsuit is primarily intended to scare people away from doing precisely that.
"Security researcher" here: Proctorio's "zero-knowledge encryption" claims were in name only, pretty much.
TL;DR Canvas and Moodle use incrementing integers for both user ID and quiz ID. Proctorio's "zero-knowledge encryption" has a shared key derived from the two IDs; they store the user ID, so that's effectively a single PIN. With their older settings, you can brute force a quiz ID in a couple hours at most.
They increased the time cost so that the brute force now takes days/weeks, but that's still peanuts, and the attack scales really well, because most exams take place at the same time (students start/end at similar times), so once you crack the quiz ID for one record, that's tens to hundreds of records; and since IDs are just increasing numbers, once you find the lower bound, working your way upwards is much easier.
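To give a rough sense of the scale, here's a minimal sketch (not Proctorio's actual code - the KDF, salt, and ID values below are made-up stand-ins) of why a key derived from a known user ID and a small, sequential quiz ID can be brute-forced so quickly:

    import hashlib
    import time

    def derive_key(user_id: int, quiz_id: int) -> bytes:
        # Stand-in KDF; the real derivation (and salt) is an assumption here.
        return hashlib.pbkdf2_hmac("sha256", f"{user_id}:{quiz_id}".encode(), b"static-salt", 1_000)

    user_id = 104_729        # known: stored alongside the record
    secret_quiz_id = 48_421  # unknown, but a small sequential integer
    target_key = derive_key(user_id, secret_quiz_id)  # stands in for "a key that decrypts the record"

    start = time.time()
    for candidate in range(1, 100_000):  # sequential IDs => a tiny search space
        if derive_key(user_id, candidate) == target_key:
            print(f"recovered quiz_id={candidate} after {time.time() - start:.1f}s")
            break

Slowing the KDF down (the "days/weeks" change) only multiplies that loop by a constant; it doesn't fix the tiny, sequential keyspace.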
They also added an option for universities to use PGP keys - but that involves training faculty, or manual setup.
For more details, here's my blog article: https://proctor.ninja/wave-rake-proctorio
> I could see it being done for something absolutely crucial such as a bar exam.
Why? How did we ever produce lawyers in the past without Proctorio?
The legal profession's dirty little secret is that the bar exam doesn't really matter. It's little more than a rite of passage and a good excuse for law firms to wait to bring in the needy newbies until the summer associates have left. Cheating on the bar exam would present a serious character and fitness issue, but not having the requisite knowledge to pass the bar exam would not at all impact a person's ability to practice law.
In-person exams. In a pandemic, that can be a problem.
If you have a testing centre you don't need this kind of software, and if your students are scattered across the world due to Covid-19 travel restrictions, you can't have a testing centre for them. I consider invigilation absolutely crucial in a professional engineering degree in mid-terms and finals (engineers are more likely to hurt more people than lawyers). I don't think this kind of software is a crutch for the lazy; it's more work to organise and manage than human invigilators, and doesn't scale as well.
I don't think it's a slippery slope. We will have significantly more local students than remote for the foreseeable future, and the local students are invigilated in person. We have students wanting to opt into remote study, including using this software, but so far if they're local we tell them no (except under extreme circumstances).
We will have to agree to disagree then. The whole term invigilation is a bit dystopian.
It is a huge slippery slope, because this privacy-invasive software ends up being used for mundane things like weekly quizzes.
Do you really want to live in the future/present where we have to submit ourselves to daily monitoring?
How about adding some monitoring, via sensors, for performance-boosting substances?
My answer is that such monitoring software should be only used under exceptional circumstances on neutral computers - my example was a testing center with insufficient human supervision.
I think you're extrapolating way too far, and hence your arguments aren't hitting the mark. The total amount of invigilation in the entirety of each course I'm involved in is 3-5 hours. This has been the case for many decades. This year, for some students, it is aided by software, due to the fact that they're unavoidably thousands of kilometres away. It is not a slippery slope, and none of your dystopian ideas seem like they could result from it.
This kind of software does not monitor computer use outside of the test. It monitors the student during the test, like a human invigilator (yes, it's recorded, so not identical). During the test, the student is only working on the test, which is not private or secret. When the test is finished, the software exits.
I don't see how you could have the resources for an infrequently used testing centre, but not enough to pay one person for an hour or two. That is not what this kind of software is useful for.
> During the test, the student is only working on the test, which is not private or secret.
You fail to consider the circumstances in which the test takes place. Students take the test in their personal spaces, and earlier in the thread, you mentioned essentially inspecting a student's living space (...angle of camera, light, checking environment, etc...) "Checking environment" is really just a cold, "process" word for inspecting a student's living space.
A student's room can often have private or secret things about them. Before you ask, not every student has the privilege of a separate, clean, blank room to take tests in. A personal space is inevitably going to have personal, private things. I've brought this up before; I personally know friends who were outed to professors as trans because their personal space has things like needles - and then you even have stuff like naive professors assuming "drugs" when it's really just medication.
It could be anything else besides that, in fact - calendars with things scribbled on them; family photos; posters for political organizations; if you look in someone's bedroom, you're inevitably going to find out things about them that they would rather you not know.
Would you take your students on a tour of your bedroom while you're teaching an online class?
EDIT: In addition, there are non-traditional students and high-risk students, and interruptions in general - there's not _only_ a test going on. I've had someone from my family interrupted in the middle of an exam because someone from the government knocked on the door to take our temperatures and ensure we're healthy and don't have COVID. There's always more going on, too.
If your software is ethical, why smear your own name by bringing a spurious lawsuit against a singled out individual like this?
There are real concerns around privacy with software like this - what will happen if some criminal finds an exploit in Proctorio's software? This kind of monitoring software is often designed to be hard to detect or bypass, and installs itself in ways that are challenging to remove. Development of this kind of software should be done with a high degree of caution and care. There isn't a risk that an invigilator will be compromised by a malicious actor who then sees every detail of students' lives and all their private data for all time. But this risk does exist for software.
If Proctorio takes legal action like this against critics what does that say about their software?
Agreed, the lawsuit is a terrible thing to do full stop, and doesn't seem to make business sense either. Maybe they think his critiques are worse than the bad press.
I agree about the privacy concerns, and in normal circumstances I wouldn't use proctoring software, because all my students would be local (or they would have signed up as distance students knowing this kind of thing would be needed). I would also have much less issue with it if it was open source.
I see it as a currently-necessary annoyance, as the least bad option. The alternatives have greater deficiencies: human invigilators using Zoom etc. don't scale (institution experience); oral exams don't scale (my experience moderating such assessments); no invigilation leads to cheating (by few students but enough to be a real problem, especially for professional qualifications - my direct experience and institution experience); shutting down education until the pandemic's over is unfeasible.
> no invigilation leads to cheating (by few students but enough to be a real problem, especially for professional qualifications - my direct experience and institution experience);
How is it a real problem? (I've never cheated but always felt I got fair grades, not affected by whatever cheaters did.) If a student memorizes everything they need for the exam today but forgets all of it tomorrow, is that useful? That's a very real thing happening every day in exams, and nobody highlights it as being as big a problem as cheating (I think it's actually a bigger problem, but the solution is not necessarily in finding those who can do that, but in optimizing the material so memorization is not such a core part of it before you start specializing).
In careers where such things really matter (think medicine), people only progress by demonstrating actual knowledge and understanding while gaining real-life experience. We don't get a surgeon out of a medical school that hasn't shadowed a surgeon and been quizzed on things to do next.
It's similar with engineering: you are given smaller, simpler things when you start off, and you build up your knowledge and experience before being given the role of a lead engineer for an airplane engine.
Exam grades are never used as a measure of someone's knowledge of the topic, though a paper they authored, or their dissertation, naturally, might be.
It's a problem for institution reputation, and for remaining accredited to confer professional degrees controlled by external organisations, like engineering degrees.
> I see it as a currently-necessary annoyance, as the least bad option... shutting down education until the pandemic's over is unfeasible.
Is it necessary though? Have you considered there are ways other than testing for a student to demonstrate their knowledge on a subject? Projects, presentations, and writing all come to mind as effective ways to measure knowledge on a subject and do not require treating all students like cheaters because a few choose to do so.
Yes, but my subject is maths :-). 1st and 2nd year engineering maths don't really have projects, presentations, or writing as options, as we mostly care about whether they know particular fundamental mathematical techniques and skills. All those options also have the problem of knowing who did the work.
From talking to remote students, I don't think they feel like they're being treated like cheaters. Instead, they seem happy we're making their study possible, and accepting of what they're asked to do. They know it's important that they can demonstrate unequivocally that they have particular skills.
My sister is still in school and the anti-cheating software gives her a lot of anxiety - not that she's a cheater or anything, but because it is well known that this software flags non-cheaters as cheaters. For example, she is not allowed to look around or talk to herself while working on a problem, both of which help her to demonstrate her knowledge effectively. If the goal is for testing to demonstrate a student's knowledge, then employing techniques that hinder a student's ability to do so in the hopes of catching cheaters is counter productive to the original goal. After all, you want to know if she can apply fundamental techniques and skills - not whether she can apply these fundamental techniques and skills while behaving under a very strict set of rules.
Even in early level mathematics, there are plenty of opportunities to introduce word problems that can only be solved by applying the relevant techniques. As long as the teachers are defining these word problems themselves (rather than pulling them from an online resource), they stand as a pretty good guard against cheating since they require students to first recognize the technique that needs to be applied, and then to extract the relevant variables from the word problem to apply that technique.
Furthermore, in early level mathematics, you can still have students present solutions to problems and explain why the solution works. For instance, say you were interested in whether or not a student has grasped the basics of derivatives - simply get on a call with that student, give them a random function to solve the derivative for, and then have them do so in front of you.
These are all things I've quickly thought of that would be at least partially effective in measuring knowledge. I imagine any person with a career dedicated to instructing students could come up with many more options that could be even more effective.
> Instead, they seem happy we're making their study possible, and accepting of what they're asked to do. They know it's important that they can demonstrate unequivocally that they have particular skills.
Students are happy to be able to study and know that it's important to demonstrate their skills - but that doesn't mean that they wouldn't be happier if they could demonstrate their skills without the invasive testing software. I argue this is setting up a false choice: "you can either learn nothing at all, or do so under this cheating software". But the reality of the situation is that they can still learn and demonstrate their skills without the cheating software.
Isn't Dominion suing Ted Cruz?
As far as I know they're suing Fox News, Mike Lindell, Sidney Powell and Rudy Giuliani for defamation.
Not sure what your point is, but the difference between "dead president Hugo Chavez flipped votes for Biden" vs "Here is how the software works, I don't think it's right" is quite important.
You make two assertions that are both clearly incorrect. a) There clearly is a difference between being watched (not filmed) in class and software filming you, recording audio, and monitoring everything you type and everything your computer is doing _in your own home_. Saying there is no difference is akin to saying that because a police officer watches you in public, we should install cameras in everyone's home. b) You say that nobody will be penalised based on an AI system, despite lots of reports of exactly that happening (some were even posted a bit further up the thread).
What the invigilator sees stays in his/her head. They are not recording things with their eyes; this software is doing that (I'm not sure of the extent - how much is client side vs server side, anonymization levels, etc.). However, I'm making that assumption based on the fact that if a person disputes something, a recording of the event would be required to ascertain the facts. That kind of information can be damaging in the case of a security breach; e.g. someone could have a habit of picking their nose when lost in thought, or changing clothes during the exam (they might reasonably do that at home and not in a public exam with an invigilator present). If such videos leak due to security breaches and circulate on the internet, the people in them can be subject to ridicule or invasion of privacy online.
People can have different expectation of privacy at home versus in classroom/testing facility.
> being videoed by a machine while you do a test is a) not much different from everyone else in the class who's being watched in person
> "suspicious" events get reviewed by a person
Being taped and having that tape scrutinized after the fact is not equivalent to being in the same room as an invigilator. I could accept a live video call as being roughly equivalent, but not if it’s recorded.
It's their personal computer though.
But I think it's OK to install the software just on a laptop you'd use to take tests, some old laptop, and keep yourself free of surveillance otherwise, right? If so, I can't see how it is a problem: someone absolutely has to be watched while taking a test. But if it's during the whole education process, then yeah, it's ridiculous and worth fighting against.
But I think it's OK to install the software just on a laptop you'd use to take tests, some old laptop, and keep yourself free of surveillance otherwise, right?
That's a fairly classist assumption that someone has multiple devices, including some older piece of junk that's late-model enough to be useful for test taking but still essentially a "throwaway." It implicitly means that an adequate right to privacy is reserved only for wealthy kids and not for anyone who has just one computer.
That's not to suggest that I support this assumption even in cases where it's true. I'm just trying to point out a common blind spot that leads comfortably well-off people to often act with callous disregard towards those who have less because it isn't a big problem for themselves.
For some people "some old laptop" is all they actually have. I don't imagine a lot of students have both a desktop, a laptop, and "some old laptop" that they can use, so the software will get installed on the single computer they use (or share) to get their studies done on.
Not everyone has an old laptop lying around they can use for such a purpose.
The software is only running during the test. Most students install it on their main laptop, and they don't seem to have much problem with that. It runs with elevated privileges (so it can make sure it's the only program running), and it's proprietary, both of which I have objections to, but unless I'm missing something and Proctorio is quite different from the software I've used, I don't see much problem with it. In Covid-19 times everyone puts up with things they don't like.
OK, then I can't see the problem at all. Unless there is a reasonable suspicion that it's malware and infects your machine with something that still keeps tabs on you when it's supposedly not running.
It's pretty unrealistic to expect everyone to have a spare laptop around.
Assuming you have enough RAM, I guess you could run it in a VM.
Programs like this usually attempt to detect and forbid being run in a VM.
Instead, an actually workable solution would be dual booting; that only needs some disk space.
The only downside is that it's a bit tricky to set up if you want to dual-boot two Windows systems.
How are universities not yet the target of a class action lawsuit for gross abuse of privacy, violation of students' rights, and discrimination based on personal characteristics?
It's endemic. I think reaching critical mass at this point would just take a single case such as this one catching fire in the national media. Something is missing from the equation, and my guess is maybe deep pockets or familiar actors.
There was a time when calculators were not allowed. The solution was that, since people will always have access to calculators, they should be allowed to use them. I think the same approach should work here: if a student can use a computer to answer a question, they can always use a computer to answer that question. Using one should be encouraged; the school should make an effort to provide the answers online and keep them available for download.
I remember a prof telling the class "if you can figure out how to program your calculator to do this then go for it". I learned how to program my calculator in that class and not much else.
This clearly doesn't work, because most test questions for students are trivially Google-able. Tests as they currently exist would then not be able to differentiate someone who has studied and internalized the material from someone who Googles quickly. Tests would have to be re-written to require a lot more creativity, which is not only hard to do as a test writer, it's harder to grade and, for students, ultimately requires even more effort (compared to rote memorization of facts or rules). So I don't think most students or instructors would love this idea.
Yes, of course, you would have to adapt the test. Make it more goal-oriented. What are we testing for anyway? The ability to do something? Looking up or memorizing things is different from applying them.
It just struck me that memorized facts and rules are extremely hard to update.
What a bunch of fuckheads. Donated.
Btw, how does the system work in Canada? After he wins this, will he be able to sue for damages and strip Proctorio clean, or what?
I wonder if anyone ever reverse-SLAPPd the humans behind the SLAPP suits.
I want to have faith in the justice system to eventually bring this company to heel -- to believe that what they're doing is not just ethically wrong, but also in severe conflict with the law and liable to open them to significant litigation risk. Any practicing lawyers here that have thoughts about this?
His legal costs fundraising page is at [1] and gives some background on the harassment by Proctorio.
Here's an odd copyright thing I came across recently .. this video (link below) has been marked as having a song when there is absolutely no fragment of the song in the video.
The video was used for Forever Young by Youth Group. But this is the original footage, from Australian TV, from 1976. Waaay before even the original Forever Young, by Alphaville in 1984.
I went to record a complaint but you have to be the owner, so I threw it in the too hard basket.
I'm sorry, but lock the computer down and have a proctor. It's worked for hundreds of years. Teaching younger people that it's okay to be surveilled 24/7 is frankly BS, and we need a new Digital Bill of Rights as of yesterday. Humans deserve dignity; if you treat everyone like a criminal, they start to feel like one, and it drives up their anxiety levels. Also, fuck Proctorio and its ilk.
If anyone here works for Proctorio. Please quit. Just quit.
Looks like they outsource most of their company to Serbia, so I doubt many will see this.
So many organizations you could say that for. They're probably paying people enough to keep them "happy".
s/Proctorio/Scientology/g would work just fine in the story https://www.theverge.com/2020/10/22/21526792/proctorio-onlin...
Of course there is a lot of creepy software and there are plenty of sleazy businesses around. That is not the issue. The issue is the university administrators - the MBA-style morons who took over the universities - who subject their students to such crap software. On the other hand, one can argue that this is really preparing students for real life - after all, they are going to enter industry and be subjected by the MBA-style morons in management there to crap like Jira, Scrum, and Slack (how is being forced to constantly broadcast your status and be immediately responsive that much different from Proctorio?), etc.
So proud of the EFF for stuff like this and it's why I'm a monthly donor.
Another lawsuit: https://www.eff.org/press/releases/eff-sues-proctorio-behalf...
tl;dr: someone examined a bit of the JavaScript code. Proctorio DMCA'd it.
Does anyone have copies/mirrors of the deleted Pastebin entries?
Does anyone have a link to the tweets and shared videos?
You mean the ass company?
How can we help? Is it donations? I couldn't see the donation link.
Proctorio is on my red flag list.
If any current or former employees of a flagged company apply for a job on my team, I expect their reason for leaving to be an ethical one, or for them to tell me how they had ethical problems with what the company was doing. If they don't, I immediately remove them from the candidate pool.
I'm conflicted about this.
I agree with you that I think working for this company and not having an ethical objection is a red flag.
But the interview environment is fraught. Is it really obvious that no candidate would think "this question is a trap"? There's a lot of interview advice that says your answer to this question can only ever hurt you, so be as bland as possible.
Some employers want compliant employees who don't rock the boat. Some employers will hear complaints about proctorio's software and hear "SJW shit".
I don't want to work for those kinds of employers, but I sympathize with people who want a job and might fear that their honest answer could be used against them.
Ultimately, I think I have to come down on this policy being a bad idea, however understandable.
Yeah it's a tricky area. In multiple cases I have been tempted to badmouth someone or a company in an interview type context, but have almost always been glad that I avoided it and just found a more subdued (but still honest) way to express my feelings.
> There's a lot of interview advice that says your answer to this question can only ever hurt you, so be as bland as possible.
Seems like terrible advice to me. If someone gives me bland answers I'll assume they're a bland person.
> Some employers want compliant employees who don't rock the boat.
Good for them; I don't. I want people who are willing to stand up for what is right and say "no".
> Good for them; I don't. I want people who are willing to stand up for what is right and say "no".
See, the problem is you think you're selecting for ethics when in reality you're just selecting for how easily they can read you and the rest of the room.
That's all interview questions.
> If someone gives me bland answers I'll assume they're a bland person.
The issue is that plenty of interviewers are bland.
There is no upside to speaking ill of your previous company. In no particular order, your potential employer might think...
... that you'll badmouth them in the future too,
... that you are the type of person who will cause drama by speaking behind other people's backs,
... that you don't know how to accept and recognize your mistakes ("it's not me, it's them")
... that you are not as nice to work with as any of those other candidates who didn't badmouth their previous employer when given a chance
... and so on. The interview process is biased enough against you already that it's not worth the risk, especially when there's so little to be gained from it.
I was an early employee at a Proctorio competitor. We actually beat them to market but they managed to get more market traction than we did.
In complete transparency, I left because I didn't get paid enough - not because of ethical reasons.
Just-out-of-college me was sold hard on the argument: "Good students who work hard and don't cheat are getting screwed over by bad students - cheaters. There is no solution for this in online classes. We need to build one"
In hindsight, it is crazy for how long a simple argument, ethical in itself, can rationalize other shitty decisions and compromises. It's kind of like a religion or a cult when you are in it. Constantly surrounded by other people who are drinking the same koolaid as you. You might feel a little uneasy, but the hive-mind grabs ahold of you, and when you see other devotees to the company vision, it soothes that uneasiness. Which no...is not good.
It wasn't until I was no longer an employee that I saw things much more clearly. In the "name of justice", we built an unethical product that fucked over many students. Plain and simple.
You know the phrase.
The road to hell... Good intentions...
Unless those ethical concerns directly impact your own team, that's deeply unprofessional and unethical on your part, and you should reconsider your priorities.
Who appointed you moral judge of others? You discriminating against candidates based on their moral views unrelated to their work is no different from discriminating based on religion or political party affiliation.
It is legitimately within your purview to ensure that a candidate doesn't have moral objections to the work they'll be expected to do, as well as look for legitimate objective red flags such as previously embezzling from an employer.
But the idea that you'd appoint yourself as some kind of moral purity guardian is deeply objectionable in a world where people legitimately disagree in good faith over ethical issues. You should re-examine this.
> Who appointed you moral judge of others?
Well, they got appointed the judge of others when they were put in charge of figuring out if it would be a good idea to hire someone for the company.
Having moral red flags is a perfectly valid thing for a company to be concerned about. Immoral employees are at risk of doing bad things, and can hurt the company.
> based on their moral views unrelated to their work
It is pretty related to the work though. It is directly about the moral decisions that they made, while at work. Specifically it would be for working for that company that is doing immoral things.
> Unless those ethical concerns directly impact your own team
Privacy issues impact every team. The threat of SLAPP lawsuits impacts every person willing to exercise their freedom of speech.
If someone has worked for a company that does privacy invasive unethical things then it's reasonable to assume they'd be fine with doing privacy invasive unethical things somewhere else.
It's up to a company to set its own policies around privacy and ensure employees follow them.
The fact that an employee worked somewhere with different, or opposed, privacy policies, doesn't mean they're unable to follow yours, any more than the fact that working at a company that used a 4-space-width tab to indent code means they're unable to follow your guideline of two-space indents.
What does it matter if an employee would be fine doing what you consider to be privacy invasive things at another company? If they can do the job you expect them to, following your company's privacy policies, then that's all that matters. Otherwise, their personal moral compass is absolutely none of your business, and it's offensively paternalistic to suggest otherwise.
It's unethical to hire people who align with your (and your company's) values?
To only hire such people, or give them preference? Of course it is.
This is precisely why hiring for "cultural fit", very often also called "company culture" or "company values", often leads to racial or other discrimination. Because different groups hold different values. E.g. white guys from expensive colleges hiring people who "align with their values" can result in deeply discriminatory behavior -- racially, culturally, socioeconomically, etc. -- even if that's not the intention.
Hey, don't you know that "culture fit" means hating all the same things that I hate?
I don't think that's entirely fair to the candidates. I don't know your stance on Proctorio and I'm not going to air dirty laundry to an effective stranger. And any company or hiring manager who tries to bait me into bad-mouthing a company I've worked at gets a red flag from me.
Sure it is. My first job outta college was for a defense contractor. It's perfectly fair to ask me why I left, or even more pointedly, "how do you feel about working for a defense contractor?"
It's not an invitation to badmouth, it IS an invitation to discuss the ethical concerns involved, and why you found it acceptable (even if it's just that it was a meal ticket until you found something you objected to less)
Out of curiosity, how _do_ you feel about having worked for a defense contractor and, separately, working for one again? What tradeoffs have you experienced (good and bad) moving from defense to non-defense industry?
I'm personally beginning to think there is a crossroads coming up for me and would value your perspectives if you have time to share. Thanks!
So this was about a decade ago and only a couple years of my life; I really wouldn't want to misrepresent my experience at the time as being reflective of what it's like now. So I'll just touch on the things that likely are the same now.
The bidding process for defense contracts means a lot of Big Design Up Front and an inability to change things easily once signed, so though there are attempts at being agile, they likely involve only the technical delivery side (CI/CD pipelines, automated testing, etc.), not the interaction with product and other stakeholders (no fail-fast-and-pivot, no real MVP, etc.). I know there have been some attempts at moving in this direction, but I honestly can't see it happening, since it's innately at odds with the short-term incentives that Congress has (and the Pentagon operates under).
The place I worked at had decent perks, but nothing like Silicon Valley tech companies. We had a cafeteria and a Starbucks in the building, but neither was subsidized. Vending machine snacks, again, at cost. Pay and benefits were reasonably competitive for the area as best I could tell (but didn't look that much; when I was ready to move on I wanted different geography as well).
General culture was fairly laid back; only the month or so before something was due did it feel crunchy, and then only for certain people. A lot of dead weight, but a lot of job security, especially for those who delivered.
The projects could be interesting. I worked on a distributed data visualization system that got me exposed to the CAP theorem back in 2011ish, which helped set the trajectory of my career, though I didn't realize it at the time. I also encountered people who said, half jokingly, that their skills had stagnated to the point they weren't hireable elsewhere. I doubt that was true, but certainly at the time there was work on technical things that didn't translate outside of defense (though many of the skills did, even if the technology didn't).
In terms of would I go back to the defense industry? Probably not if I have a choice. While there are a lot of problems in public sector tech companies too, and many with the same issues as defense contractors, there are more of them, and the constraints placed on them tend not to feel as daunting or arbitrary as some of the ones placed on defense companies. Ultimately the impression I got was that defense companies tend to be very stable and predictable, and I just don't appreciate that as much as I appreciate being able to suggest changes and seek improvements.
Approaching this the other way - stepping into my first public sector company, I got to appreciate being agile, actually working with stakeholders to understand and address their needs directly. I got to make technology choices with the team based on what would address the problem the best (and that we were interested in supporting) instead of being told what they were based on what the contract said. And I got to work on smaller more focused teams. Longer term, I've gotten better compensation I feel like (though haven't directly compared; is there more than just a ~15% bonus offered at defense companies now?), the feeling of a lot more things 'done', and the ability to change jobs (without having to change geographic locations) when I felt like I needed a change.
There's a difference between bad-mouthing your employer and discussing ethical issues that are public knowledge.
I'm not sure whether I can take this statement at its word. If that's the response when questioned about working at a company whose entire purpose is purportedly ethical, I would strongly reconsider your position. There are many corporate cultures where this kind of apathy and whatabout-ism would be considered a red flag. That would certainly be the case for any process I've run.
Let me put it this way, asking why you left an employer is fine. Expecting a certain answer for a subjective situation and rejecting an applicant because they didn't give you exactly that answer is not exactly a healthy behavior.
I'm here for a job, not to play weird mind games.
I'm not sure why it's so hard to say "Yes, company X did a lot of things well (Y, Z, AA) but could have improved in sectors AB, AC, AD." Actually, I do understand why -- you may be risk averse, and the fear of losing your job or rocking the boat precludes you from making a critique even if other people get hurt. Maybe in part because you too have mouths to feed and folks who will get hurt if you do so. So you downvote and make up excuses for it. After all, that's easier than addressing the cognitive dissonance, no?
It's understandable, but I still can't agree that it leaves you without some amount of ethical culpability. Maybe significantly less than an executive. But still, some. It's more understandable for roles that don't have as strong a position in the labor market as engineers, but I find it a little bit less so for myself, as someone who works in engineering.
I think you (and anyone else downvoting) should read Eichmann in Jerusalem [1]. It's about this exact ethical quandary. I would hope it would change your opinion on these things, but if it doesn't, agree to disagree. And certainly don't expect any sympathy from me or the rest of society.
Then why ask? All job interviews are weird mind games. My assumption when I'm asked why I left my last employer is that they're looking to second-guess my other answers based on what I say, or checking to see if I'm a disagreeable person who is willing to badmouth somebody generous enough to employ me. This is just taking the latter in the opposite direction.
There are plenty of reasons to move employers that don't involve saying that the company you were working for is unethical. You could say that you've reached your ceiling and are looking for the next step up. That's a fair one. The company you are leaving may not be able to promote you. Or you're looking to work on new projects and your current company is just maintaining and/or adding features. That's also fair.
> checking to see if I'm a disagreeable person who is willing to badmouth somebody generous enough to employ me.
Ah, but you see, the guy I originally replied to is checking to see if you're willing to badmouth companies he dislikes. And if you don't, he will dismiss your application. It's an unspoken rule you can violate without ever knowing you're in violation of it, and it has a huge effect on the outcome of the scenario.
No reasonable interviewer is going to put you in the position of having to cast aspersions on a previous employer. That's a minefield for all sorts of reasons.
> No reasonable interviewer
Not convinced you have much space to declare some global idea of "reasonable interviewer" although you're more than welcome to complain about you personally not getting an offer from a company that clearly doesn't share your values. Your words say "badmouth" but that's just a way of hyperbolizing the ability to be vocally self-critical, which is necessary to learn lessons and achieve actual growth as a company.
Most startups have to do this, in my experience. You can't experiment without it. If you can't identify one weakness in your prior employer's model (whether ethically or functionally) or you'll get huffy about talking about that in an interview setting, why would I trust you with the responsibility of being in my organization? I'm going to get to those answers eventually. Why waste your time?
Looking back at history, do you think they called them the "traitorous eight" [1] because they were "reasonable interviewers" who would never "cast aspersions on a previous employer?" No. The history of this industry in the USA is based on vocal critique, competition and paper chasing in the pursuit of out-executing your competition.
If you don't have that culture of critique and open communication, it will be very hard to become #1 in whatever you're trying to do. Maybe that will never matter to you and the companies you join. Okay. If that works for you, great. But I'm a career startup person, so it does matter to me. I would literally /never/ have gotten anywhere without being able to earnestly note what my previous employers did right and wrong, and use those learnings in my next opportunity -- often very vocally.
Now you're being condescending and projecting things onto me simply to try and undercut my point. Try talking to people without calling them huffy or assuming the people you're talking to are stupid. Maybe they just have different opinions from you.
I have no problem with honest critique. But once you get into talking about the ethics of a company, especially one you may have taken money from for an extended period of time, that's an entire subjective area. I'm not willing to go there with either employers or employees.
My way of avoiding the issue is to simply not work for companies I find ethically questionable. And if I were working for a company that I found later to be ethically questionable, I'd get another job.
The eight didn't get new jobs; they started a new company. It's like Activision and Atari. And it's a far cry from "I don't agree with their management structure" or "I don't like their method of crediting work" to "I think they're unethical".
> But once you get into talking about the ethics of a company, especially one you may have taken money from for an extended period of time, that's an entire subjective area.
It may sound subjective, but that's a view that may not be shared by the rest of the world; not in the eyes of the law, legislators, or prospective employees. An accomplice is an accomplice. Especially in 2021, if you can't accept some responsibility to interrogate the side effects of the paycheck you accept (even if you don't agree with it), you're going to have a very difficult time participating in public discourse.
You could risk giving the impression that you'd rather ignore any negative externalities of your own work due to your own selfish motivations, even if it comes at the expense of society at large. And many members of society at large may deem you a coward and a liar by omission (I can't say I'd blame them). Some of those members may be people you'll never work with, but some of them may be companies that are interviewing. It won't matter if you want to hand-wave that away by saying "maybe they just have different opinions from you" -- when you're taking the money, you're either taking the responsibility that comes with taking that money or not.
Your vocation is part of your civic duty because it forms the basis for how you materially contribute to the taxes that fund society. If the path by which you earn those funds is compromised, then incrementally, so too is your little equity slice of society. Do you think it's reasonable to accept your desire to bow out from that conversation altogether? Or do you think that such a request will sound immature, evasive and tone deaf, that it will only succeed in drawing suspicion and ire?
Rather than accuse me of undercutting your point, I'd ask you to consider whether your point undercuts itself.
Bingo. If you discard my resume because I refuse to badmouth a previous employer in an interview then that's just a sign that I'd probably rather not work for you.
And as other people mentioned, especially in an interview where you're being intensely judged and have to give second thought to everything you say, I'm not about to give anything other than a neutral answer that's supposed to appease most people.
Agreed. An employer could just as easily say they don’t want anyone who bad mouths their previous employers because it reflects poorly on them.
Damned if you do, damned if you don’t.
Only if the goal is simply "get a job, anywhere". If the interview is to mutually figure out if you should work for the company, there is no such thing as a damned if you do, damned if you don't; all it can be is a "they care about what I care about" or "I don't want to work there".
Being able to pick your job that way is a privilege, and it certainly isn’t granted to everyone in all fields
Hence my first statement. That said, the whole context was about a software company.
Ah yes, screw those people for... needing to pay the bills.
If they're applying for a new job it seems like that's exactly what they're trying to do.
Yeah sure. I also discriminate against people because I don't like the color of their skin while feeling high and mighty. What you're saying is that if company A bids more salary than company B, you reserve the prerogative to judge that anybody should join company B, because you deem the totally legal company A bad, even though the principles of the labour market dictate that if company A furthers the welfare of the worker more, he should join company A. This means you're denying the validity of a mechanism that you as the hiring manager -- proudly calling yourself leadership in your bio -- exploit yourself every day to influence the commitment of your own employees. Pretty hypocritical. Chase the money, but only if I want you to! -- you wish.
I'm not sure what you're trying to say. When people have options no one chooses a job purely on economic motives. The ethics of the role and the company should absolutely be a consideration. If that's not something you consider then you may be sociopathic.
I also take exception to comparing a protected class to something like ethical behavior. I would never filter candidates by protected class and no one else should either. Past unethical behavior is absolutely something you should consider.
Look at my website (in my bio). I've done unethical things in the past. I expect potential employers to question me about them.
You should not be letting your bias and personal opinions remove candidates from the hiring pool.
Am I wrong to think Proctorio is a rubbish name?
I keep thinking 'Proctology'.
The EFF took up the student's defense and sued Proctorio back yesterday. Good!
https://www.eff.org/press/releases/eff-sues-proctorio-behalf...
(This was originally posted to https://news.ycombinator.com/item?id=26900217, which we merged hither.)
This isn't the same, it's two different lawsuits.
One is for a student by the name of Ian Linkletter from British Columbia.
EFF is suing on behalf of Erik Johnson from Miami.
The only thing they have in common is Proctorio.
Ah—sorry, I definitely missed that. I doubt that the distinction makes much difference for HN purposes, though, since threads on either one of those will fuse into the same discussion.
I conducted interviews at Google for about 8 years. I would often run across a fresh-from-college candidate who fell flat on their face, and all I could think was, "You've somehow managed to waste 4 years of your life and tens of thousands of dollars doing whatever it was you were doing for all that time."
Some performed so poorly that I could only conclude that they cheated their way all the way through their program, because the discrepancy between their GPA and the fact that they couldn't even begin to explain memset (one of many trivial examples I ran across) was so stark. Well, all that cheating certainly caught up with them when they were face-to-face with me trying to get a job. It must have been stressful and humiliating for them as they sat there hemming and hawing while I asked them elementary question after elementary question that they couldn't even begin to answer. Or maybe not, depending on whatever lack of pride and sense of self-worth led them to cheat like they did in the first place.
I'm sure many of them managed to get a job somewhere in industry, and whoever hired them got to deal with a hire who turned out to be an imposter.
Of course I recognize that some people may have been severely impacted by the technical interview process to the point that they were intellectually paralyzed. But at least some of them I'm sure just didn't learn anything.
I was a teaching assistant for multiple courses at a UK university. There was an introductory functional programming course, and two other functional programming based courses that followed up from that. The courses all had large practical programming exercises, and exams at the end.
However, every year there would be a good number of students in the follow-up courses who would not even know how to define the simplest function, use/define a list, etc., when I was trying to help them work on the practical exercises. These were weekly voluntary 4-hour labs in which we were available to help them and there was no performance pressure.
Somehow they always had a solution by the deadline despite getting stuck during the labs. This was often clearly copied. However, I was strongly discouraged by the lecturer from ever making an issue out of this, even when at one point a student handed in a copy with someone else's name on it. Basically, the university barely dared to formally accuse students, and the lecturers also didn't want to deal with the hassle.
Most of these students ended up passing the module, and later on the whole course. It made me really sad/frustrated.
It did help setting expectations when performing interviews for software engineers at certain companies I worked in the past. I would ask a candidate to use their most comfortable programming language and would start with a simple version of a problem to get them in their comfort zone. Sadly, I would often have to simplify that multiple times, and ended up with candidates that could not write a single function, for loop, while loop, interface, class, or any other component of the language they actually wanted to use.
While IMO your example was rather specific to C, I've certainly met candidates who can't write a for loop who I've felt similar things about.
Perhaps people get stressed in interviews?
For instance, I don't code in C that often, and if asked, I'm not sure I could remember off the top of my head if the function signature is `memset(dst, value, n)` or `memset(dst, n, value)`. My guess from intuition is the first, but I would be hemming and hawing a bit if asked in an interview something like that.
Also, many CS programs are easy enough that you'll never even encounter a memset.
From my personal experience (and that of many friends of mine) who interviewed at Google, you won't be asked about memset or any C-related question unless you stated you were proficient in C and picked C as your interviewing language.
Knowing this context might actually explain the situation that the parent comment describes a bit better. It is absolutely possible, as you described, to be perfectly capable within your CS niche without ever bothering with memset. But in that case, you probably won't pick C as your interviewing language and won't state that you are proficient in it. Which is what, I suspect, might have happened in that scenario the parent comment is describing.
> you won't be asked about memset or any C-related question unless you stated you were proficient in C and picked C as your interviewing language
I might not ask about C-specific context, but if you don't know how to write a value to a sequence of bytes, you have no business being a SWE at Google.
>I might not ask about C-specific context, but if you don't know how to write a value to a sequence of bytes, you have no business being a SWE at Google
I know quite a few engineers who passed Google interviews and work there, and they wouldn't know how to do this off the top of their head. I also interviewed with Google a few times, and not once was I asked a question like this.
Also seems weird you would say that they have "no business being an SWE at Google", given that this problem is pretty trivial, and any competent engineer would be able to figure it out after some quick googling. It isn't some difficult algorithmic problem, it is a very specific and small piece of trivia.
Having performed interviews at Google for about 5 years, I indeed don't think needing to "know how to write a value to a sequence of bytes" is representative. Many current junior candidates use Java/Python, in which it seems too niche a question to be a valuable signal for a hiring committee, particularly for a general SWE role. Either you've dealt with this problem before and you know it, or you'll struggle.
> too niche a question to be a valuable signal for a hiring committee
I feel like I must be taking crazy pills, but this thread is definitely derailed at this point, so...
I don't hesitate to take a hard stance on this. If you don't know off the top of your head how to write a value to a sequence of bytes, you are an incompetent programmer and are unfit for the profession. That is indeed a VERY strong signal to a hiring committee.
It's like interviewing to be an IP lawyer and not knowing what the types of intellectual property are, or interviewing for a copy editor role and not knowing basic sentence structure. There's a baseline of knowledge that anyone ordinarily skilled in the art needs to have, and trivial operations on sequences of values is without question one of them.
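For concreteness, here's roughly what "write a value to a sequence of bytes" means - a minimal sketch in Python rather than C, since the operation itself is language-agnostic:

    buf = bytearray(16)            # 16 zero bytes
    for i in range(len(buf)):      # the memset-style loop: set every byte to one value
        buf[i] = 0xFF
    assert buf == b"\xff" * 16

    buf[:] = bytes(len(buf))       # or overwrite the whole buffer at once
    assert buf == bytes(16)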
> Perhaps people get stressed in interviews?
Like I said, "Of course I recognize that some people may have been severely impacted by the technical interview process to the point that they were intellectually paralyzed."
> I'm not sure I could remember off the top of my head if the function signature
That would be a terrible question. Of course I wouldn't ask that.
Fair enough, I missed that while skimming your comment, and if you agree that would be a terrible question, then I generally agree with your sentiment.
What's the difference between a competent threat, and an incompetent threat?
They actually called their software Proctorio!??!?!
From the Greek "anus"
https://en.wiktionary.org/wiki/procto-#English
I am dying lol
...Or, just maybe, from the English word "proctor", meaning someone who invigilates an exam.
Which is from the Latin "procurator" meaning overseer, unrelated to the Greek procto-
Please send the proctor to my home, I'm cramming for my final exam and need some help invigilating my webcam...
https://www.hemantmedicam.com/product/usb-video-proctoscope/
Yeah, I found that...
https://en.wiktionary.org/wiki/proctor
Though in fairness, proctor/procurator are extremely antiquated English words that have survived in common usage only in the US, for some reason.
Scotland’s Procurators Fiscal are fairly important and frequently referred to.
I reckon the majority of English speakers who read 'Proctorio' will immediately think anus.
I reckon the majority of English speakers will not think of the Greek word for anus - especially educated English speakers who have had exams proctored before.
Google Trends: "proctor" is dramatically more common than "proctologist". I'd go so far as to say you're objectively wrong.
https://trends.google.com/trends/explore?geo=US&q=proctor,pr...
Isn't that a measure of how often words are searched? Everyone knows what a proctologist is, and no one wants to learn any more about them than they absolutely have to learn. A comedic trope is for a sadistic doctor to brutally thrust his hand into a nitrile glove and ask if some poor sap is ready for his exam. On the other hand, those with college degrees (a minority of the population) have heard the word "proctor" on eight different occasions in their entire lives.
I think the majority of English speakers are completely unaware of this meaning. “Proctor” on the other hand, is something they would’ve heard of in an exam setting.
Maybe I'm in the minority; I'd never heard of a "proctor", but I have heard of a "proctologist", which is an anus doctor.
Is this one of those regional things? Is the same word used throughout the entire English speaking world?
Not a native English speaker myself, so my association was to the Greek word as well.
No, it's from the word "Proctor" which is frequently a person overseeing an exam: