Learning to Code with and Without AI

austinhenley.com

33 points by sam_ezeh 2 years ago · 3 comments

sam_ezeh (OP) 2 years ago

> Students in the Codex group made more progress during the seven sessions of training and finished significantly more tasks (91%) than the Baseline group (79%).

I expected this.

> On the Immediate Post-test that was conducted a day after the training sessions, both groups performed similarly on coding tasks (Codex: 61%, Baseline: 63%) and multiple-choice questions (Codex: 49%, Baseline: 42%).

> However, on the Retention Post-test that was conducted a week later, the Codex group was able to retain what they learned slightly better on the coding tasks (Codex: 59%, Baseline: 50%) and multiple-choice questions (Codex: 44%, Baseline: 35%).

However, this completely surprised me. I assumed there would be an over-reliance on AI in the test group.

  • Sakos 2 years ago

    I find the analysis of the Codex group much more interesting and I feel further studies are necessary before drawing conclusions.

        Students frequently (n=501, 30%) copied the task description to generate the entire code with no prior manual coding attempts.
    
        Sometimes (n=197, 12%) students divided the task into multiple subgoals, and asked the AI to generate *only* the first subgoal instead of the entire task. 
    
        When decomposing the task into multiple subgoals, students sometimes (n=85, 5%) asked for code that was already in their editor. 
    
        Rarely (n=16, 1%), students generated code after already having a solution, in order to check and compare the AI's output with their own. 
    
        Students occasionally (n=89, 5%) wrote prompts that were similar to pseudo-code (e.g. "for num in numbers, if num > large, set large to num"). 
    
        Although most of the time students properly tested the AI-generated code before submitting, there were several (n=63, 4%) instances in which students submitted AI code without testing it. 
    
        Rarely (n=30, 2%), students actively tinkered with the AI-generated code to properly understand its syntax and logic. 
    
        Similarly, sometimes students manually added code (like `print` statements) to the AI-generated code to help them verify that it works correctly.
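
    As a concrete illustration of that pseudo-code-style prompting (a minimal sketch, not taken from the study; the input list is made up), here is roughly what such a prompt describes in Python, with the kind of manual `print` check mentioned in the last point:

        numbers = [3, 17, 9, 42, 5]  # illustrative input, not from the study

        # "for num in numbers, if num > large, set large to num"
        large = numbers[0]
        for num in numbers:
            if num > large:
                large = num

        print(large)  # quick manual check: expect 42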
    
    Anecdotal evidence based on my own experience has suggested better recall of things I've engaged with and learned through ChatGPT than with standard learning through books or videos. I think the interactivity is an important aspect.
eternityforest 2 years ago

I briefly tried codeium before disabling it because it was using too much CPU and overheating my computer.

It's such a completely different experience. Almost the same level of difference as going from something like vanilla Notepad++ to VS Code with all the extensions set up.

I still have no interest in using AI for creative writing but definitely want to use it for coding in the future if it becomes practical for low budget projects.

I'm not exactly sure it's a great thing for very new beginners, but after that I don't see much issue besides energy use.

The stuff it does best is the stuff that's basic enough to do almost unconsciously, where it's easy to stop paying full attention (or burn out fast from paying close attention) and make simple, boring bugs.

I guess only time will tell how one's skills and habits change after a few years with AI; maybe people will start zoning out and not fully supervising the code it writes.
