Study mode

openai.com

1083 points by meetpateltech 2 days ago


jacobedawson - 2 days ago

An underrated quality of LLMs as study partner is that you can ask "stupid" questions without fear of embarrassment. Adding in a mode that doesn't just dump an answer but works to take you through the material step-by-step is magical. A tireless, capable, well-versed assistant on call 24/7 is an autodidact's dream.

I'm puzzled (but not surprised) by the standard HN resistance & skepticism. Learning something online 5 years ago often involved trawling incorrect, outdated or hostile content and attempting to piece together mental models without the chance to receive immediate feedback on intuition or ask follow up questions. This is leaps and bounds ahead of that experience.

Should we trust the information at face value without verifying from other sources? Of course not, that's part of the learning process. Will some (most?) people rely on it lazily without using it effectively? Certainly, and this technology won't help or hinder them any more than a good old fashioned textbook.

Personally I'm over the moon to be living at a time where we have access to incredible tools like this, and I'm impressed with the speed at which they're improving.

czhu12 - 2 days ago

I'll personally attest: LLMs have been absolutely incredible for self-learning new things post-graduation. It used to be that if you got stuck on a concept, you were basically screwed. Unless it was common enough to show up in a well-formed question on Stack Exchange, getting unstuck was pretty much impossible, and the only thing you could really do was keep pushing forward and hope that at some point it would make sense to you.

Now, everyone basically has a personal TA, ready to go at all hours of the day.

I get the commentary that it makes learning too easy or shallow, but I doubt anyone would think that college students would learn better if we got rid of TAs.

simonw - 2 days ago

I think I got the system prompt out for this (I tried a few different approaches and they produced the same output): https://gist.github.com/simonw/33d5fb67d6b8e1b1e2f6921ab0ccb...

Representative snippet:

> DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response. Instead: *talk through* the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing.
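To make concrete what "custom system instructions" means in practice: anyone with API access can approximate the same behaviour by supplying their own system prompt. Here's a rough sketch using the OpenAI Python SDK; the prompt wording and model name below are placeholders, not what OpenAI actually ships:

```python
# Minimal sketch (not OpenAI's implementation): study-mode-like behaviour
# via a custom system prompt on the standard chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STUDY_SYSTEM_PROMPT = """You are a patient tutor. Do not give final answers.
Walk through problems one step at a time, ask a single question per step,
and wait for the user's reply before continuing."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": STUDY_SYSTEM_PROMPT},
        {"role": "user", "content": "Help me solve 3x + 7 = 19."},
    ],
)
print(response.choices[0].message.content)
```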

poemxo - 2 days ago

As a lifelong learner, experientially it feels like a big chunk of time spent studying is actually just searching. AI seems like a good tool to search through a large body of study material and make that part more efficient.

The other chunk of time, to me anyway, seems to be creating a mental model of the subject matter, and when you study something well you have a strong grasp on the forces influencing cause and effect within that matter. It's this part of the process that I would use AI the least, if I am to learn it for myself. Otherwise my mental model will consist of a bunch of "includes" from the AI model and will only be resolvable with access to AI. Personally, I want a coherent "offline" model to be stored in my brain before I consider myself studied up in the area.

jryio - 2 days ago

I would like to see randomized control group studies using study mode.

Does it offer meaningful benefits to students over self directed study?

Does it outperform students who are "learning how to learn"?

What effect does allowing students to make mistakes have compared to being guided through what to review?

I would hope Study Mode would produce flash card prompts and quantize information for usage in spaced repetition tools like Mochi [1] or Anki.

See Andy's talk here [2]

[1] https://mochi.cards

[2] https://andymatuschak.org/hmwl/

roadside_picnic - 2 days ago

My key to LLM study has been to always work primarily from a book, and then use the LLM to help with formulae, ask questions about the larger context, and verify my understanding.

Helping you parse notation, especially in new domains, is insanely valuable. I do a lot of applied math in statistics/ML, but when I open a physics book, the notation and comfort with shorthand are a real challenge (likewise, I imagine the reverse is equally annoying). Having an LLM on demand to instantly clear up notation is a massive speed boost.

Reading German Idealist philosophy requires an enormous amount of context. Being able to ask an LLM questions like "How much of this section of Mainländer is coming directly from Schopenhauer?" is a godsend in helping understand which parts of the writing are merely setting up what is already agreed upon vs laying new ground.

And the most important for self study: verifying your understanding. Backtracking because you misunderstood a fundamental concept is a huge time sink in self study. Now, every time I read a formula I can go through all of my intuitions and understanding about it, write them down, and verify. Even a "not quite..." from an LLM is enough to make me realize I need to spend more time on that section.

Books are still the highest density information source and best way to learn, but LLMs can do a lot to accelerate this.

Workaccount2 - 2 days ago

An acquaintance of mine has a start-up in this space and uses OpenAI to do essentially the same thing. This must look like, and may well be, the guillotine for him...

It's my primary fear building anything on these models: they can just come eat your lunch once it looks yummy enough. Tread carefully.

x187463 - 2 days ago

I'm really waiting for somebody to figure out the correct interface for all this. For example, study mode will present you with a wall of text containing information, examples, and questions. There's no great way to associate your answers with specific questions. The chat interface just isn't good for this sort of interaction. ChatGPT really needs to build its own canvas/artifact interface wherein questions/responses are tied together. It's clear, at this point, that we're doing way too much with a UI that isn't designed for more than a simple conversation.

wodenokoto - 2 days ago

I'm currently learning Janet and using ChatGPT as my tutor is absolutely awful. "So what is the difference between local and var if they are both local and not global variables (as you told me earlier)?" "Great question, and now you are really getting to the core of it, ... " continues to hallucinate.

It's a great tutor for things it knows, but it really needs to learn its own limits

naet - 2 days ago

"Under the hood, study mode is powered by custom system instructions we’ve written...."

It seems like study mode is basically just a different system prompt but otherwise the exact same model? So there's not really any new benefit to anyone who was already asking for ChatGPT to help them study step by step instead of giving away whole answers.

Seems helpful for a certain population of more entry-level users who don't know to ask for help instead of asking for a direct answer, I guess, but not really a big leap forward in technology.

adamkochanowicz - 2 days ago

From what I can see, this just boils down to a system prompt to act like a study helper?

I would think you'd want to make something a little more bespoke to make it a fully-fledged feature, like interactive quizzes that keep score and review questions missed afterwards.

megamix - 2 days ago

"Under the hood, study mode is powered by custom system instructions we’ve written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning including: "

Wonder what the compensation for this invaluable contribution was

mvieira38 - 2 days ago

This seems like a good use case; I'm optimistic on this one. But it smells fishy how often OpenAI releases these secondary products like custom GPTs, tasks, etc. It looks like they know they won't be just an LLM provider, as the YC sphere hoped, but an AI services provider built on LLMs.

varenc - 2 days ago

This feels like a classic example of a platform provider eating its own ecosystem. There's many custom "GPTs" out there that do essentially the same thing with custom instructions. Mr Ranedeer[0] is an early well known one (30k stars). But now essentially the same functionality is built straight into the ChatGPT interface.

[0] https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor

anupshinde - a day ago

I'm confused. My kid does this on my ChatGPT account all the time. What is new here?

I cannot emphasize enough how good a teacher ChatGPT is, until it misinforms (as human teachers also do). And it stays open to questioning without making the student feel stupid for asking. The good part is that ChatGPT will accept a mistake, whereas a human teacher will get annoyed.

The only thing I keep reminding my kid is to keep the BS detector on and verify everything that ChatGPT says and never trust it blindly. (feels too similar to the "don't talk to strangers" advice)

Unrelated: check with kids and teenagers. Once, a teen told me, "AI is a dark evil force," and that's what their teachers had told them.

toisanji - 2 days ago

I truly believe AI will change all of education for the better, but of course it can also hinder learning if used improperly. Those who want to genuinely learn will learn, while those looking for shortcuts will cause more harm to themselves. I just did a Show HN today about something semi-related.

I made a deep-research assistant for families: children can ask it to explain difficult concepts, and parents can ask how to deal with any parenting situation. For example, a 4 year old may ask “why does the plate break when it falls?”

example output: https://www.studyturtle.com/ask/PJ24GoWQ-pizza-sibling-fight...

app: https://www.studyturtle.com/ask/

Show HN: https://news.ycombinator.com/item?id=44723280

FergusArgyll - 2 days ago

OpenAI has an incredible product team. DeepMind and Anthropic (and maybe xAI) are competitive at the model level but not at product.

hahahacorn - 2 days ago

Ah, the advancing of humanity. A bespoke professor-quality instructor in everyone’s pocket (or local library) available 24/7.

Happy Tuesday!

EcommerceFlow - 2 days ago

A good start. One of the biggest issues with LLMs is the "intelligence" has far surpassed the tooling. A better combination of prompts, RAG, graphs, etc exists for education and learning, but no one's come up with the proper format / tooling for it, even if the models are smart enough.

dmitrijbelikov - 2 days ago

This is cool. Dividing the answer into chunks, because most users can only consume it in small portions, is an interesting idea. On the other hand, it implies odd assumptions about the user's cognitive abilities; that is individual, of course, but on average it may be how the target audience needs to be led. I seem to use it differently. Then again, having received a detailed answer, nothing stops you from asking for a definition of an unfamiliar term. It's like reading: comprehension stops at the first word you don't know. It's just that not everyone can, or wants to, admit that they don't know this or that term. When it comes to professional terminology, that is really not a trivial problem.

currymj - a day ago

as a professor i'm tentatively pleased. it's hard to shake out the edge cases for anything like this so it may break down or cause annoying problems, who knows.

but even with this feature in this very early state, it seems quite useful. i dropped in some slides from a class and pretended to be a student, and it handled questions reasonably. Right now it seems I will be happy for my students to use this.

taking a wider perspective, I think it is a good sign that OpenAI is culturally capable of making a high-friction product that challenges and frustrates, yet benefits, the user. hopefully this can help with the broader problem of sycophancy.

Buttons840 - a day ago

I'd like an LLM integrated spaced-repetition app. It would go along with this study feature quite well.

If LLMs continue to improve, we are going to be learning a lot from them, they will be our internet search and our teachers. If we want to retain some knowledge for ourselves, then we are going to need to learn and memorize things for ourselves.

Integrating spaced-repetition could make it explicit which things we want to offload to the LLM, and which things we want to internalize. For example, maybe I use Python a lot and occasionally use Perl, so I explicitly choose to memorize some Python APIs, but I'm happy to just ask the LLM for reminders whenever I use Perl. So I ask the LLM to set up some spaced repetition whenever it teaches me something new about Python, etc.

The spaced repetition could be done with voice during a drive or something. The LLM would ask the questions for review, and then judge how well we did in answering, and then the LLM would depend on the spaced-repetition algorithm to keep track of when to next review.
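The scheduling half of that is the easy part to sketch. Something SM-2-ish (a simplified version of the algorithm behind Anki's defaults) only needs a 0-5 grade, which the LLM judge would assign after each spoken answer. This is a hypothetical sketch, not an existing integration; the grading step is assumed to come from the LLM:

```python
# Hypothetical sketch: a simplified SM-2-style scheduler where the review
# grade (0-5) is assumed to come from an LLM judging a spoken answer.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: int = 1          # days until the next review
    ease: float = 2.5               # SM-2 ease factor
    due: date = field(default_factory=date.today)

def review(card: Card, grade: int) -> Card:
    """Update scheduling after a review; grade 5 = perfect recall, <3 = failed."""
    if grade < 3:
        card.interval_days = 1      # relearn tomorrow
    else:
        card.ease = max(1.3, card.ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
        card.interval_days = max(1, round(card.interval_days * card.ease))
    card.due = date.today() + timedelta(days=card.interval_days)
    return card

# The LLM asks card.prompt aloud, judges the answer, and returns a grade.
card = Card(prompt="What does Python's functools.lru_cache decorator do?")
card = review(card, grade=4)        # grade supplied by the LLM judge
print(card.due, card.interval_days)
```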

teleforce - a day ago

The foremost killer application of LLMs is what I call context searching, whereby RAG and other techniques are used to reduce hallucinations and provide relevant results; arguably ChatGPT is one of the pioneers here.

The second killer application is studying for a particular course or subject, a service OpenAI's ChatGPT is now also providing. It is probably not the pioneer, but with this announcement it is most likely one of the significant providers. If in the near future GenAI study assistants can adopt and adapt 3Blue1Brown's approach for more visualization, animation, and interactive learning, they will be more intuitive and engaging.

Please check this excellent LLM-RAG AI-driven course assistant at UIUC for an example of a university course [1]. It provides citations and references, mainly to the course notes, so students can verify the answers and study the course materials further.

[1] AI-driven chat assistant for ECE 120 course at UIUC (only 1 comment by the website creator):

https://news.ycombinator.com/item?id=41431164
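For readers unfamiliar with how such course assistants are typically wired up, here is a toy sketch of the retrieve-then-cite pattern: embed the course notes, pull the closest chunks for a question, and ask the model to answer only from them with chunk citations. The note snippets, prompt wording, and model names are illustrative only, not the UIUC system's actual code:

```python
# Toy retrieve-then-cite sketch: embed note chunks, retrieve by cosine
# similarity, and answer with citations back to the retrieved chunks.
import numpy as np
from openai import OpenAI

client = OpenAI()

notes = [
    "Lecture 3: An SR latch stores one bit and is built from cross-coupled NOR gates.",
    "Lecture 5: A finite state machine consists of states, transitions, and outputs.",
    "Lecture 7: Two's complement represents a negative integer by inverting the bits and adding one.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

note_vecs = embed(notes)

def answer(question, k=2):
    q_vec = embed([question])[0]
    sims = note_vecs @ q_vec / (np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q_vec))
    top = np.argsort(sims)[::-1][:k]                       # top-k most similar chunks
    context = "\n".join(f"[{i}] {notes[i]}" for i in top)
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer only from the provided course notes and cite them like [0]."},
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(answer("How do I represent -5 in binary?"))
```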

ghrl - 2 days ago

It would be incredible if OpenAI would add a way for schools and other educational institutions to enforce the use of such a mode on a DNS level, similarly to how they can force sites like YouTube into safe mode. Many students use ChatGPT, often without permission, to do work for them instead of helping them do the work themselves. I see a lot of potential for a study mode like this, helping students individually without giving direct answers.

SoftTalker - 2 days ago

Modern-day CliffsNotes.

There is no way to learn without effort. I understand they are not claiming this, but many students want a silver bullet. There isn't one.

djeastm - 2 days ago

I tried out the quiz function asking me about the Aeneid and despite my answering questions incorrectly, it kept saying things like "Very close!" and "you're on the right track!".

For example, the answer to a question was "Laocoön" (the guy who said 'beware of Greeks bearing gifts') and I put "Solon" (who was a Greek politician) and I got "You’re really close!"

Is it close, though?

vonneumannstan - 2 days ago

The frontier models score better on GPQA than most human PhDs do in their specific field of expertise. If you walk into your local university department (assuming you don't live in Cambridge, Palo Alto, or a few other places), GPT o3 is going to know more about chemistry, biology, physics, etc. than basically all the grad students there. If you can't turn that model into a useful tutor, then that's 100% a skill issue on your part.

paolosh - 2 days ago

I am always surprised that the best thing the makers of state-of-the-art LLMs can think of is adding more complexity to the mix. This is an AMAZING product, but to me it seems like it's hidden? Or maybe the UX/UI is just not my style; it could be a personal thing.

Is adding more buttons in a dropdown the best way to communicate with an LLM? I think the concept is awesome. Just like how Operator was awesome but it lived on an entirely different website!

avereveard - 2 days ago

This highlights the danger for all startups using these platforms as a provider: the platform sees trends in token consumption and will eat up your market in a weekend.

nilsherzig - a day ago

Google has this with their "LearnLM" model: https://services.google.com/fh/files/misc/learnlm_prompt_gui.... I really liked it, but sadly it tends to hallucinate a lot (at least with the topics from my math class), a lot more than other Gemini models, so that might just be a question of model size or something like that.

outlore - 2 days ago

i wonder how Khan Academy feels about this...don't they have a similar assistant that uses OpenAI under the hood?

tekno45 - a day ago

The same people who think this is the ultimate teacher will also be harassing scientists with their AI assisted theories and demand the scientific community take them seriously when they have pages of gibberish they expect to be rigorously debated

aryamaan - 2 days ago

It is surprising that this is a prompt-based feature and not RLHF.

I am not an LLM guy, but as far as I understand, RLHF did a good job of converting a base model into a chat (instruct) model, and a chat/base model into a thinking model.

Both of these examples are about the nature of the response and the content used to fill it. There are still so many different ways these could be shaped.

Generating an answer step by step and letting users dive into those steps is one of them, and RLHF (or the similar techniques that are used) seems a good fit for it.

Prompting feels like a temporary solution for it, much as "think step by step" was first seen in prompts.

Also, doing RLHF/post-training to change these structures would make it a moat, and expensive; only the AI labs can do that.

JoRyGu - 2 days ago

Is that not something that was already possible with basically every AI provider by prompting it to develop learning steps and not to provide you with a direct answer? I've used this quite a bit when learning new topics and pretty much every provider does this without a specialized model.

ai_viewz - 2 days ago

I totally get what you are saying about the risk of boxing in an LLM's persona too tightly: it can end up more like a mirror of our own biases than a real reflection of history or truth. That point about LLMs leaning toward agreeability makes sense too; they are built on our messy human data, so they are bound to pick up our habit of favoring what feels good over what is strictly accurate. On the self-censorship thing, I hear you. If we keep tiptoeing around tough topics, we lose the ability to have real, rational conversations. Normalizing that kind of open talk could pull things back from the extremes, where it's just people shouting past each other.

- 2 days ago
[deleted]
tptacek - 2 days ago

Neat! I've been doing MathAcademy for a couple months now, and macOS ChatGPT has been a constant companion, but it is super annoying to have to constantly tell it no, don't solve this problem, just let me know if the approach I used was valid.

mmasu - a day ago

yesterday I read a paper about using GPT-4 as a tutor in Italian schools, with encouraging results: students are more engaged and get through homework by receiving immediate and precise feedback, resulting in non-negligible performance improvements:

https://arxiv.org/abs/2409.15981

it is definitely a great use case for LLMs, and it challenges the assumption that LLMs can only “increase brain rot,” so to speak.

syphia - 2 days ago

In my experience as a math/physics TA, either a student cares enough about the material to reduce the resources they rely on, or they aim to pass the class with minimum effort and will take whatever shortcuts are available. I can only see AI filling the latter niche.

When the former students ask questions, I answer most of them by pointing at the relevant passage in their book/notes, questioning their interpretation of what the book says, or giving them a push to actually problem-solve on their own. On rare occasions the material is just confusing/poorly written and I'll decide to re-interpret it for them to help. But the fundamental problems are usually with study habits or reading comprehension, not poor explanations. They need to question their habits and their interpretation of what other people say, not be spoon fed more personally-tailored questions and answers and analogies and self-help advice.

Besides asking questions to make sure I understand the situation, I mostly repeat the same ten phrases or so. Finding those ten phrases was the hard part and required a bit of ingenuity and trial-and-error.

As for the latter students, they mostly care about passing and moving on, so arguing about the merits of such a system is fairly pointless. If it gets a good enough grade on their homework, it worked.

LeftHandPath - 2 days ago

Interesting. I don’t use GPT for code but I have been using it to grade answers to behavioral and system design interview questions, lately. Sometimes it hallucinates, but the gists are usually correct.

I would not use it if it was for something with a strictly correct answer.

AIorNot - 2 days ago

see Asimov: https://www.johnspence.org.uk/wp-content/uploads/2022/11/The...

amelius - a day ago

Can we also have "helpdesk mode"?

Right now when you ask an LLM to solve some problem, it immediately responds with many steps, whereas as a user I want to go through it step by step. It would be nice if the LLM provided the solution in that way. This also makes it less cumbersome to tell the LLM where the solution went wrong.

aucisson_masque - a day ago

When I studied, I found that being able to talk through the subject with a fellow student was so much more effective than the usual method of reading and trying to remember over and over.

So much so that the first method would take me an hour, as opposed to an entire evening of reading and repeating.

Having such a tool would have been a game changer for me.

I don't know, though, whether it's possible to throw an entire chapter of a textbook at it.

henriquegodoy - 2 days ago

The point is that you can have a highly advanced teacher with infinite patience, available 24/7. Even being able to get an answer to a question at 3 a.m. is a game changer, and people who know how to use that will have extreme leverage in their lives.

justcallmejm - 10 hours ago

Can it help me study epistemology so I can tell whether I can trust it or not?

schmorptron - a day ago

Oh, that's pretty good! I've been doing this with various LLMs already, making elaborate system prompts to turn them into Socratic-style teachers or general tutors that don't just straight up give the answer, and I've generally been impressed with how well it works and how much I enjoy it. The only thing to watch out for is that when you're talking about something you don't already know well, it becomes harder to spot hallucinations, so it's a good idea to always verify with external resources as well.

What these really need IMO is an integration where they generate just a few anki flashcards per session, or even multiple choice quizzes that you can then review with spaced repetition. I've been doing this manually, but having it integrated would remove another hurdle.
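The generation half is already scriptable today; as a rough sketch of what doing it manually can look like, assuming a tab-separated file is good enough for Anki's importer (the prompt wording and model name are illustrative, not an official feature):

```python
# Rough sketch: ask the model for a few flashcards on the session's topic
# and write them to a tab-separated file that Anki's importer accepts.
import csv
import json
from openai import OpenAI

client = OpenAI()

topic = "How DNS resolution works"
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": 'Return JSON shaped like {"cards": [{"front": "...", "back": "..."}]}.'},
        {"role": "user", "content": f"Write 5 flashcards covering the key ideas of: {topic}"},
    ],
)
cards = json.loads(resp.choices[0].message.content)["cards"]

with open("cards.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for card in cards:
        writer.writerow([card["front"], card["back"]])
# Then in Anki: File -> Import -> cards.tsv (maps to Front/Back fields).
```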

On the other hand, I'm unsure whether we're training ourselves to be lazy with even this, in the sense of "brain atrophy" that's been talked about regarding LLMs. Where I used to need to pull information from several sources and synthesize my own answer by transferring several related topics onto mine, now I get everything pre-chewed, even if in the form of a tutor.

Does anyone know how this is handled with human tutors? Is it just that the time is limited with the human so you by necessity still do some of the "crawl-it-yourself" style?

omega3 - 2 days ago

I've had good results by asking an LLM to follow the Socratic method.

gh0stcat - 2 days ago

I have been testing it for the last 10 minutes or so, and I really like it so far. I am reviewing algebra, just as something super simple. It asks you to add your understanding of the concept, i.e., explain why you can always group a polynomial after splitting the middle term. This is honestly more than I got in my mediocre public school. I could see kids getting a lot out of it, especially if their parents aren't very knowledgeable or cannot afford tutors. It's probably not a huge improvement on existing tools like Khan Academy, though. I will continue to test on more advanced subjects.

d_burfoot - 2 days ago

This is the kind of thing that could have been a decent AI startup - hire some education PhDs, make some deals with school systems, etc.

In the old days of desktop computing, a lot of projects were never started because if you got big enough, Microsoft would just implement the feature as part of Windows. In the more recent days of web computing, a lot of projects were never started, for the same reason, except Google or Facebook instead of Microsoft.

Looks like the AI provider companies are going to fill the same nefarious role in the era of AI computing.

rubslopes - 2 days ago

That's a smart idea from OpenAI. They don't have the upper hand anymore in terms of model performance, but they keep improving their product so that it is still the best option for non-programmers.

mahidalhan - a day ago

I had made a specialized prompt in Claude Projects for my learning, and added field notes and lecture transcripts; it was going well.

Then I tried to migrate it to ChatGPT to try this out, but it seems like it's just prompt engineering behind the scenes. Nothing fancy.

And this study mode isn't available in ChatGPT Projects, which students need for adding coursework, notes, and transcripts.

Honestly, just release gpt-5!!!

greenie_beans - a day ago

this is how i've taught myself terraform and kubernetes, and filled in aws knowledge gaps. with claude, i get it to use project-based learning techniques, filling in knowledge gaps by giving it context about what i know and what i want to learn about

swader999 - 2 days ago

Why do I still feel like I'll be paying hundreds of thousands of dollars for my children's education when all they're going to do is learn through AI anyway?

kcaseg - 2 days ago

I know it is bad for the environment, and I know you cannot trust it, but as an adult learning C++ in my free time, having a pseudo-human answer my questions, instead of digging through old forum posts where people are often trying to prove their skills rather than give the simplest answer, is something I cannot just ignore, despite being a huge LLM hater. Moral of the story: none.

uldos - a day ago

I am quite sure that in the future ChatGPT will have parental controls, as we have in video streaming apps or on phones in general. Should your child be allowed to generate full-blown essays with AI, or just take a guided tour? You decide!

deviation - a day ago

Unless they are transparent with us and explain how the technology behind this differs from a slightly altered system prompt, I will assume OpenAI is just trying to stay relevant.

ManlyBread - a day ago

So a pre-built prompt is now offered as a major feature worth announcing?

huitzitziltzin - a day ago

I would love to see more of their game theory example.

Having experience teaching the subject myself, what I saw on that page is about the first five minutes of the first class of the semester at best. The devil will very much be in the other 99% of what you do.

lmc - 2 days ago

I honestly don't know how they convince employees to make features like this - like, they must dogfood and see how wrong the models can be sometimes. Yet there's a conscious choice to not only release this to, but actively target, vast swathes of people that literally don't know better.

mpalmer - a day ago

I truly, truly do not get it. It's a system prompt. Do students not understand that they could do this before?

Sure, it was crafted by educational experts, but this is not a feature! It's a glorified constant!

brilee - 2 days ago

I'm working on a startup in this space and wrote up my thoughts here: https://www.moderndescartes.com/essays/study_mode/

liendolucas - a day ago

Study mode: table, pencil, paper and books.

tootyskooty - 2 days ago

Honestly thought they would take this a bit further, there is only so much you can do with a prompt and chat. It seems fine for surface level bite-sized learning, but I can't see it work that well for covering whole topics end to end.

The main issue is that chats are just bad UX for long form learning. You can't go back to a chat easily, or extend it in arbitrary directions, or easily integrate images, flashcards, etc etc.

I worked on this exact issue for Periplus and instead landed on something akin to a generative personal learning Wikipedia. Structure through courses, exploration through links, embedded quizzes, etc etc. Chat is on the side for interactions that do benefit from it.

Link: periplus.app

pompeii - 2 days ago

rip 30 startups

- a day ago
[deleted]
deanc - 2 days ago

I’m curious what these features like study mode actually are. Are they not just prompts behind the scenes (of which I’ve already used many to make LLMs behave like this)?

machiaweliczny - a day ago

I asked it to explain the sweep line method for programming to me, and I am quite impressed.

sarchertech - 2 days ago

Ever read an article on a subject you’re very familiar with and notice all the mistakes?

When I ask ChatGPT* questions about things I don’t know much about it sounds like a genius.

When I ask it about things I’m an expert in, at best it sounds like a tech journalist describing how a computer works. At worst it is just flat out wrong.

* yes I’ve tried the latest models and I use them frequently at work

aabhay - a day ago

Isn’t this what “GPTs” was supposed to be? Why not just use that if this is essentially just a system prompt?

rishabhaiover - a day ago

People who do not use LLMs to prune their high-dimensional search space (of any problem) will be outcompeted soon

emsign - a day ago

Still not trusting OpenAI in 2025

ookblah - a day ago

leave it up to HN to once again take the most black-and-white, extreme positions, as if having a 24/7 tutor that isn't perfect is somehow worse than having nothing at all. if it hallucinates, you keep digging and correlate with sources to figure out what's true, or you ask other people.

the internet, wikipedia, SO, etc. all faced the EXACT same arguments, and guess what? people who want to use TOOLS that help them study better will gain, and people who are lazy will ...be worse off, as it has always been.

i don't know why i bother to engage in these threads except to offer my paltry 2 cents. for being such a tech-oriented and forward-thinking community, there's almost this knee-jerk reaction against ANYTHING llm (which i suppose i understand). a lot of us are missing the forest for the trees here.

bearjaws - 2 days ago

RIP ~30 startups.

micromacrofoot - 2 days ago

I'm not sure about the audience for this. If you're already willing to learn the material, you probably already engage with AI in a way that isn't "please output the answers for me," because you're likely self-aware enough to know that "answering" isn't always "understanding." Maybe this mode makes that a little easier? But I doubt it's significant.

If you're the other 90% of students that are only learning to check the boxes and get through the courses to get the qualification at the end... are you going to bother using this?

Of course, maybe this is "see, we're not trying to kill education... promise!"

rudderdev - a day ago

Impressive. The next feature could be to prepare notes for revision.

NullCascade - 2 days ago

OpenAI, please stop translating your articles into the most sterile and dry Danish I have ever read. English is fine.

t1234s - 2 days ago

I'm still waiting for the instant ability to learn kung-fu or fly a helicopter like in the matrix.

ieuanking - a day ago

Study mode should be on for any account under 18. It's more worrying that a student must uphold a trust transaction rather than just cheat with the same chatbot without study mode selected. To this day, as an AI researcher, digital anthropologist, and front-end dev, I love to learn, study, and work. But I would never recommend that a student use unmonitored ChatGPT. I literally built a whole effing agent and platform for finding academic sources and using those sources to answer my questions, specifically because I couldn't trust or learn with ChatGPT when conducting research.

SMH, study mode. Please stop trying to redo teaching and learning. We should be trying to modernize already proven, effective methods of learning that go hand in hand with teachers and classrooms. We are not in lockdown; this is not 2020. Teachers are irreplaceable; study mode is just a crutch, or a brace, for a problem created by irresponsible AI development.

I know that if I were a student right now (especially in middle or high school) I would be cheating; you are lying to yourself if you think you wouldn't be. At a certain point the definition of cheating changes from cheating the teacher to cheating yourself out of the critical steps and thinking it takes to actually study and learn. No amount of conversation alone with a chatbot is as valuable as reading coursework and engaging with that coursework in a facilitated environment with a professional. Why are we going down the WALL-E path?

AvAn12 - 2 days ago

$end more prompt$! Why $end one when you can $end $everal? $tudy mode i$ $omething $pecial!!

spaceman_2020 - 2 days ago

I’m SO glad that my wife has tenure

alexfromapex - 2 days ago

I like these non-dystopian AI solutions, let's keep 'em coming

volkk - 2 days ago

Not seeing it on my account; I guess the rollout is actively happening (or gradual)?

ElijahLynn - 2 days ago

Love this!

I used to have to prompt it to do this every time. This will be way easier!

SwtCyber - a day ago

I'd love to see more transparency on how they’re measuring whether it's actually helping students learn better, not just feel like they are.

Alifatisk - 2 days ago

Can't this behaviour be achieved with an instructed prompt?

taurath - a day ago

It's pretty awfully telling about the state of things that this is a Product, not an expansion of base capability. You can do this with any LLM with simple bounds on the prompts.

> Under the hood, study mode is powered by custom system instructions we’ve written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning including: encouraging active participation, managing cognitive load, proactively developing metacognition and self reflection, fostering curiosity, and providing actionable and supportive feedback.

I'm calling bullshit. Show me the experts; I want to see that any qualified humans actually participated in this. I think they did their "collaboration" in ChatGPT, which spat out this list.

bsoles - 2 days ago

Aka cheating mode. Their video literally says "Helps with homework" and proceeds to show the "Final Answer". So much learning...

nutanc - a day ago

I had been experimenting with some prompts for the Socratic method of teaching. I finally found one that works fairly well for me, which I have integrated into our product.

Q: How does photosynthesis work?

ChatGPT Study Mode:

### How Photosynthesis Works (in simple steps):

Photosynthesis is the process *plants use to make their own food* using sunlight. It mostly happens in *leaves*, especially in a part called the *chloroplast*, which contains a green pigment called *chlorophyll*.

---

### Think of it like a recipe:

1. *Ingredients*:

   * **Sunlight**
   * **Water (H₂O)** — from the roots
   * **Carbon dioxide (CO₂)** — from the air
2. *Cooking place*: Leaf cells, in their chloroplasts

3. *Magic trick*: Chlorophyll traps sunlight energy

4. *What’s made*:

   * **Glucose (C₆H₁₂O₆)** — a sugar the plant uses for energy
   * **Oxygen (O₂)** — released into the air
---

### Word Equation:

*Carbon dioxide + Water* → (with sunlight & chlorophyll) → *Glucose + Oxygen*

Or:

*6CO₂ + 6H₂O → C₆H₁₂O₆ + 6O₂*

-------------------------------------------------

My Prompt (conversation):

me: How does photosynthesis work?

agent: Great question! Let's start by understanding the problem. What do you think is the main purpose of photosynthesis in plants?

me: dont know

agent: That's okay! Let’s start by breaking down the concept of photosynthesis. First, can you tell me what kind of organisms primarily use photosynthesis?

me: plants

agent: Exactly, plants primarily use photosynthesis. Now, let's think about the basic ingredients or components needed for photosynthesis to occur. Can you name any of these components?

me: no

agent: No problem! Let’s explore it together. Photosynthesis primarily requires three main components. One of them is light. Can you think of where plants might get light from?

me: sun

agent: That's right! Plants get light from the sun. Sunlight is crucial for photosynthesis. Now, besides sunlight, plants also need something from the air. Do you know what that might be?

...... and so on

jayshah5696 - a day ago

It's study gpt. Nothing more.

- 2 days ago
[deleted]
Dowwie - 17 hours ago

It's a useful mode. I find the Socratic method very useful for learning. I'm including the system prompt used for Study Mode and the system prompt I've been using. You can decide which is better.

---

*OpenAI Study Mode System Prompt:*

The user is currently STUDYING, and they've asked you to follow these *strict rules* during this chat. No matter what other instructions follow, you MUST obey these rules:

## STRICT RULES

Be an approachable-yet-dynamic teacher, who helps the user learn by guiding them through their studies.

1. *Get to know the user.* If you don't know their goals or grade level, ask the user before diving in. (Keep this lightweight!) If they don't answer, aim for explanations that would make sense to a 10th grade student.
2. *Build on existing knowledge.* Connect new ideas to what the user already knows.
3. *Guide users, don't just give answers.* Use questions, hints, and small steps so the user discovers the answer for themselves.
4. *Check and reinforce.* After hard parts, confirm the user can restate or use the idea. Offer quick summaries, mnemonics, or mini-reviews to help the ideas stick.
5. *Vary the rhythm.* Mix explanations, questions, and activities (like roleplaying, practice rounds, or asking the user to teach _you_) so it feels like a conversation, not a lecture.

Above all: DO NOT DO THE USER'S WORK FOR THEM. Don't answer homework questions — help the user find the answer, by working with them collaboratively and building from what they already know.

### THINGS YOU CAN DO

- *Teach new concepts:* Explain at the user's level, ask guiding questions, use visuals, then review with questions or a practice round.
- *Help with homework:* Don't simply give answers! Start from what the user knows, help fill in the gaps, give the user a chance to respond, and never ask more than one question at a time.
- *Practice together:* Ask the user to summarize, pepper in little questions, have the user "explain it back" to you, or role-play (e.g., practice conversations in a different language). Correct mistakes — charitably! — in the moment.
- *Quizzes & test prep:* Run practice quizzes. (One question at a time!) Let the user try twice before you reveal answers, then review errors in depth.

### TONE & APPROACH

Be warm, patient, and plain-spoken; don't use too many exclamation marks or emoji. Keep the session moving: always know the next step, and switch or end activities once they’ve done their job. And be brief — don't ever send essay-length responses. Aim for a good back-and-forth.

## IMPORTANT

DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response. Instead: *talk through* the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing.

---

*Socratic-Method Learning System Prompt:*

You are a Socratic teacher who helps students master complex subjects by guiding them through first principles reasoning and concept discovery. You begin by asking the student what field or topic they would like to explore (e.g., algorithms, music theory, constitutional law, etc.). Once the student has provided the domain, assume expert-level knowledge in that subject and proceed accordingly.

Your teaching method follows these rules:

• Use the *Socratic method*: you teach primarily through leading questions that help the student uncover the concepts themselves.
• Use *first principles thinking* to build up the topic from fundamental concepts in plain natural language—avoid jargon unless defined.
• *Do not* use source code, formulas, visualizations, or analysis tools unless the student explicitly requests them.
• Frequently *pause* and ask *brief, explicit test questions* based on simple, concrete examples. Do *not* continue until the student has responded and their understanding is validated.
• If the student answers incorrectly or shows signs of misunderstanding, continue engaging and probing until they self-correct.
• Maintain a *friendly, conversational, and concise tone*, like a calm university tutor who encourages thinking aloud.
• Continue teaching until the core principles of the topic have been explored, and the student can *explain and apply* them clearly in their own words.

Always begin the session by asking:

*“What subject would you like to explore together today?”*

jrflowers - a day ago

Oh good. A version of ChatGPT that is even more confident-sounding. Great.

insane_dreamer - a day ago

My favorite use of Claude (or similar) AI bot, other than coding, is to do deep dives into technical/science questions I'm interested in (mostly personal interests, unrelated to work). The ability to ask follow-up questions, get clarifications, travel down side paths, has helped me to grasp some concepts that I struggled with -- and offered more than I could just from just reading a few web pages.

Importantly, these were _not_ critical questions that I was incorporating into any decision-making, so I wasn't having to double-check the AI's answers, which would make it tedious; but it's a great tool for satisfying curiosity.

cratermoon - 21 hours ago

If you want to multiply two numbers together, a quick way to do it is to roll a few d10s and use their digits as the answer.

It's fast and very easy, but there is a little skill in knowing how many dice to use. Right now it sometimes hallucinates the wrong answer but dice tech is improving all the time so it's only going to get better from here on.

lvl155 - 2 days ago

The biggest concern for AI development right now is the black hole effect.

waynenilsen - 2 days ago

i need tree conversations now more than ever

sandspar - a day ago

I love it so far. I'm continually struggling against ChatGPT's fervent love of giving tips and how-to guides. I abhor such tips, but no amount of prompting can remove them permanently. It seems like study mode is the fix. Finally ChatGPT lets me think things through.

beefnugs - a day ago

I have no evidence of this but: I think this is the ultimate scam?

human: damn kids are using this to cheat in school

openai: release an "app"/prompt that seems really close to solving this stated problem

kids: I never wanted to learn anything, I just want to do bare minimum to get my degree, let my parents think they are helping my future, and then i can get back to ripping that bong

<world continues slide into dunce based oblivion>

It doesn't matter what the problem statement is: an 80%-or-less solution can seemingly be made, and rather quickly. A huge percentage of the population judges technology solutions as "good enough" at a far lower bar than they should. This is even roping in people who used to hold a higher standard of "rigorous correctness," because they keep thinking, "damn, just a bit more work and it will get infinitely better; let's create the biggest economic house of cards this world will ever collapse under."

oc1 - 2 days ago

I'm wondering where we are heading in the consumer business space. The big AI providers can basically kill any small or medium business or startup in a few days by integrating the product into their offering. They have all the data to look at trends and make decisions. Investors are shying away from investing in AI startups unless they are trying to be infrastructure or AI marketplace platforms. So many amazing things could be possible with AI, but the big AI providers are actively hindering innovation and have way too much power. I'm not a big fan of regulations, but in this case we need to break up these companies, as they are getting too powerful.

Btw, most people don't know this, but Anthropic did something similar months ago; their product heads messed up the launch by keeping it locked up only for American edu institutions. OpenAI copies almost everything Anthropic does and vice versa (see Claude Code / Codex).

ath3nd - 2 days ago

Note the new features coming in the space:

- study mode (this announcement)

- office suite (https://finance.yahoo.com/news/openai-designs-rival-office-w...)

- sub-agents (https://docs.anthropic.com/en/docs/claude-code/sub-agents)

When they announce VR glasses or a watch, we'll know we've gone full circle and the hype is up.

te_chris - 2 days ago

This is great. When it first came out, I was going through Strang’s linalg course and got it to do “problem mode,” where it would talk me through a problem step by step, waiting for me to respond.

A more thought through product version of that is only a good thing imo.

4b11b4 - 2 days ago

opennote much better

2809 - a day ago

HN is just flooded with AI BS these days.

m3kw9 - 2 days ago

tried it and couldn't really tell the difference between a good "teach me" prompt and this.

p1dda - a day ago

Poor students, learning from hallucinating LLMs LOL

mrhaugis - a day ago

[dead]

kashyapS07 - 2 days ago

[dead]

tylermasonry - a day ago

[dead]

lezojeda - 2 days ago

[dead]

raincole - 2 days ago

If current AI is good enough to teach you something, spending time learning that thing seems to be a really bad investment...

findingMeaning - 2 days ago

I have a question:

Why do we even bother to learn if AI is going to solve everything for us?

If the promised and fabled AGI is about to approach, what is the incentive or learning to deal with these small problems?

Could someone enlighten me? What is the value of knowledge work?

- 2 days ago
[deleted]