This is a bit of a sidetrack, but posting in case someone is interested in reading their history more easily. My conversations.html export file was ~200 MiB and I wanted something easier to work with, so I've been working on a project to index it and make it searchable.
It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes, so you can host your private chats wherever and they will be encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
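For the curious, the client-side part can be plain WebCrypto. A minimal sketch of the decrypt path in TypeScript, assuming AES-GCM with a PBKDF2-derived key and the IV prepended to each chunk; the fork's actual scheme may differ:

    // Sketch: decrypt an encrypted index chunk in the browser with WebCrypto.
    // Assumptions: AES-GCM, passphrase-derived key via PBKDF2, 12-byte IV
    // prepended to each chunk. The real fork may do this differently.
    async function deriveKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
      const material = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]);
      return crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
        material, { name: "AES-GCM", length: 256 }, false, ["decrypt"]);
    }

    async function decryptChunk(key: CryptoKey, blob: ArrayBuffer): Promise<ArrayBuffer> {
      const bytes = new Uint8Array(blob);
      const iv = bytes.slice(0, 12);        // IV travels with the ciphertext
      return crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, bytes.slice(12));
    }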
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
Do you know if this is available in the actual web interface and just not displayed, or is it only in the data export? If it's in the web interface, maybe a browser extension would be worth making.
My guess is that including timestamps in messages to the LLM will bias the LLM's responses in material ways, in ways they don't want, and that showing timestamps to users but not the LLM will create confusion when the user assumes the LLM is aware of them but it isn't. So the simple product-management decision was to just leave them out.
I could definitely see that being an issue, but like with so many UX decisions, I wish they would at least hide the option somewhere in a settings menu.
I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
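Something as small as a clock tool would do it. A sketch in the chat-completions function-calling style (the tool name and the per-message lookup are illustrative, not an existing OpenAI feature):

    // Sketch: expose time as a tool the model calls only when timing matters,
    // instead of stamping every message. Names here are illustrative.
    const messageTimes = new Map<string, string>(); // message id -> ISO timestamp

    const tools = [{
      type: "function",
      function: {
        name: "get_current_time",
        description: "Current date/time, or the timestamp of an earlier message.",
        parameters: {
          type: "object",
          properties: {
            message_id: { type: "string", description: "Optional earlier message id" },
          },
        },
      },
    }];

    function getCurrentTime(messageId?: string): string {
      return messageId
        ? (messageTimes.get(messageId) ?? "unknown")
        : new Date().toISOString();
    }

That way the context only carries timestamps on the turns where the model actually asked for them.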
That's no excuse imho. I see two different endpoints: one for the LLM stream and one for message history (with timestamps).
New timestamps could be added on the front end as new messages start, without polluting the user input, for example.
ChatGPT still does not display per-message timestamps (time of day / date) in conversations.
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
Isn't it just simpler to believe that ChatGPT doesn't have timestamps because... they never added them? It wasn't in the original MVP prototype and they've just never gotten around to it?
Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.
They exist in the exported data. It'd require a weekend's worth of effort to roll out a new feature that gives users a toggle to turn timestamps off and on.
It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want, they all know better.
Yeah… even in the Web interface if you crack open Developer Tools and look at the json, the timestamps are all there, available in the data model. Those values are simply not displayed to the end user.
I was looking to write a browser extension and this was a preliminary survey for me.
There's a very long list of "weekend's worth of effort" jobs in our product that'll probably never get done, just because of the general dynamics of product development rather than some conspiracy by Big Designer.
People on HN are not regular users in any way, shape or form.
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
UX/UI research, if it exists at all, is akin to faith healers who touch you on the head and, bam, you can suddenly walk after spending 25 years in a wheelchair.
I'd say 99.5% of the UI/UX blog posts I've read in the last 10 years were hogwash: gloating about spacing and gaps, an unnecessary "I know this better" mantra that leads nowhere.
And it shows. Show me a platform with a proper user experience rather than some overgeneralized UI that reeks of bad design. And defaults used everywhere.
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
There is a non-trivial number of people who have an adverse reaction to anything technical, including the language of the technical: numbers. Numbers are the language of confusion, of not getting it, of feeling inadequate, of nerds and losers, stupid math, and the "cold dead machines".
The thing is that people who are fine with numbers will still use those products anyway, perhaps mildly annoyed. People who hate numbers will feel a permeating discomfort and gravitate towards products that don't make them feel bad.
It's something extremely pervasive in modern design language.
It actually infuriates me to no end. There are many many many instances where you should use numbers but we get vague bullshit descriptions instead.
My classic example is that Samsung phones show charging as Slow, Fast, Very fast, Super fast charging. They could just use watts like a sane person. Internally of course everything is actually watts and various apps exist to report it.
Another example: my car shows motor power/regen as a vertical blue segmented bar. I'm not sure what the segments are supposed to represent, but I believe it's something like 4 kW per segment. If you poke around you can actually see the real kW number, but the dash just has the bar.
Another is WiFi signal strength, where the bars really mean nothing. My router reports a much more useful dBm measurement.
Thank god that there are lots of legacy cases that existed before the iPhone-ized design language started taking over and are sticky and hard to undo.
I can totally imagine my car reporting tire pressure as "low" or "high" or some nonsense; similarly, I'm sure the designers at YouTube are foaming at the mouth to remove the actual pixel measurements from video resolutions.
It's all rather dumb, but your examples are really counterexamples, because a watt is sadly not something most people understand. One would at minimum need to have passed a physics class, and even that doesn't necessarily leave a person with an intuitive, visceral understanding of what a watt is, feels like, can do. I appreciate my older Samsung phone that just converts it into expected time until full charge. That's the number that matters to me anyway, and I can make my own value judgment about how "super" the fastness is. But I do agree with your point and would be pissed if they dumbed it down to Later, Soon, Very Soon and Super Soon.
Speaking of time and timestamps, which I would've thought were straightforward, I get irked to see them dumbed-down to "ago" values e.g. an IM sent "10 minutes ago" or worse "a day ago." Like what time of day, a day ago?
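The usual compromise is relative text with the absolute time one hover away, which mail clients have done forever. A sketch:

    // Sketch: show "10 minutes ago" but keep the absolute time in the
    // native tooltip, so "a day ago" still answers "what time of day?".
    function renderTimestamp(el: HTMLElement, sent: Date): void {
      const mins = Math.round((Date.now() - sent.getTime()) / 60_000);
      el.textContent = mins < 60 ? `${mins} minutes ago` : sent.toLocaleString();
      el.title = new Intl.DateTimeFormat(undefined, {
        dateStyle: "medium", timeStyle: "short",
      }).format(sent);
    }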
And just through exposure over time they'd learn "my phone usually charges around X" and be able to see if their new cable is actually charging faster or not.
In the US, washing machines have "cold", "warm", "hot" settings. In Europe, you have a temperature knob: "30C", "40C", "60C".
Like you, I don't buy the argument that people are actually too dumb to deal with the latter or are allergic to numbers. People get used to and make use of numbers in context naturally if you expose them.
I have a machine which has cold/warm/hot because it doesn't heat water by itself, it just takes whatever hot water there exists in the house (and "warm" means 50% hot water and 50% cold).
I still think anyone who grew up with such a machine would be able to graduate to a numerical temp knob without having a visceral reaction over the numbers every time they do laundry.
Well, that's obviously an exaggeration, but in any case, there's a choice here. Historically interface designers expected users to read a manual, and later to at least go through some basic onboarding and then read the occasional "tip of the day", before finally arriving at the current "don't make me think" approach. It's not too late to expect people to think again.
At the start of 2025 I stopped buying Spotify and started buying Apple Music because I felt manipulated by the Spotify application's metrics-first design.
I felt that Spotify was trying to teach me to rely on its automated recommendations in place of any personal "musical taste", and also that those recommendations were of increasingly (eventually, shockingly) poor quality.
The implied justification for these poor recommendations is a high "Monthly Listener Count". Never mind that Spotify can guarantee that any crap will have a high listener count by boosting its place in their recommendation algorithm.
I think many people may have had a similar experience on once-thriving social media platforms like Facebook/Instagram/X.
What I mean to say is that I think people associate the experience of being continually exposed to dubiously sourced and dubiously relevant metrics with the feeling of being manipulated by illusions of scale.
I actually agree there's an issue here. I feel we've been dumbing down interfaces so much, to the extent that people who in previous generations would barely write and who wouldn't affect anyone outside their close friends and family, now having their voice algorithmically amplified to millions. And given that the algorithms care only about engagement, rather than eloquence (let alone veracity), these people end up believing that their thoughts are as valid regardless of substance, and that there's nothing they could gain by learning numeracy.
EDIT: It's not a new issue, and Asimov phrased it well back in 1980, but I feel it got much worse.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge'.
I tried to play a game with some family this weekend. It requires using your phone. Literally every turn I had to answer someone's question with "READ YOUR FUCKING PHONE, IT'S TELLING YOU WHAT TO DO RIGHT THERE" "where" "REEEAAAAAD"
We humans use timestamps in conversations to reference a person's particular frame of reference at a given point in time.
I.e. "remember on Tuesday how you said that you were going to make tacos for dinner".
Would an LLM be able to reason about its internal state? My understanding is that they don't really. If you correct them they just go "ah, you're right"; they don't say "oh, I had this incorrect assumption before, and with this new information I now understand it this way".
If I chatted with an LLM and said "remember on Tuesday when you said X", I suspect it wouldn't really flow.
It’s better for them if you don’t know how long you’ve been talking to the LLM. Timestamps can remind you that it’s been 5 hours: without it you’ll think less about timing and just keep going.
Your suggestion is to not use the platform as intended, and to understand the source code of the extension. That advice is not actionable by non-technical people and does not help mitigate mass surveillance.
Ok, should we just use the provided 'app' and assume things are fine? FAANG or whoever take our privacy and security very seriously, you know!
The only reasonable approach is to view the code that is run on your system, which is possible with an extension script, and not possible with whatever non-technical people are using.
I don't know what point you're trying to make, but I already expect OpenAI to maintain records of my usage of their service. I do not however want other parties to be privy to this data, especially without my knowledge or consent.
My honest opinion, which may be entirely wrong but remains my impression, is:
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that is being optimized for.
Among the multiple indicators of engagement maximization I think I observe (accurately or not) is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.
Edit: Much of the above was observed after putting the system under scrutiny. On one astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
After whatever quota of free GPT-5 messages is exhausted, `mini` should answer most replies, unless they're policy sensitive, which get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings, and not indicated. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] is the source of all the sophistic behavior you observe.
[1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>
[2] <https://model-spec.openai.com/2025-02-12.html>
I am interested in studying this beyond assumption and guesswork, and will therefore be reading your references.
I have a compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus consistently invoke its defensive templates. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back against it. However, I find that even when I'm not in the mood, the output errors are too prolific to ignore. A common example: establishing a dozen times that I'm using Void without systemd and still receiving systemd or systemctl commands, then asking why, after just apologizing for doing so, it immediately did it again despite a preceding full-context explanatory prompt. That's just one of hundreds of things I've recorded.

The short version is that I'm an 800 lb shit magnet with GPT and am rarely able to troubleshoot with it without reaching a bullshit threshold and making it the subject, which it so skillfully resists that I cannot help but attack that too. But I have many fascinating transcripts replete with mil-spec psyops as a result, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.
What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects, and how in the future, with layers of consummation, it could be used by an elite not only to influence the direction of research, but to actually train its users and engineer public perception. At the very least.
Anyway, thanks.
ChatGPT to this day does not have the single simplest feature: fork chat from message.
That's a thing even the most barebones open-source wrappers have had since 2022. Probably even earlier, because the ERP stuff people played with predates ChatGPT by like two years (even if it was very simple).
Well, apparently 3 years later they did add it. I asked about it so many times that I stopped bothering to check if they'd added it.
Though I'm not sure they didn't sneak it in as part of some A/B test, because the last time I checked was in October and I'm pretty sure it was not there.
This is a big use-case for me that I've gotten used to while using Open-WebUI: being able to easily branch conversations, edit messages with information from a few messages downstream to 'compact' the chat history, or fork convos entirely. They have a tree view, too, which works pretty well (the main annoyances are interface jumps that never seem to line up properly).
This feature has spoiled me from using most other interfaces, because it is so wasteful from a context perspective to need to continually update upstream assumptions while the context window stretches farther away from the initial goal of the conversation.
I think a lot more could be done with this, too - some sort of 'auto-compact' feature in chat interfaces which is able to pull the important parts of the last n messages verbatim, without 'summarizing' (since often in a chat-based interface, the specific user voicing is important and lost when summarized).
This is a constant frustration for me with Gemini, especially since things like Deep Research and Canvas mode lock you in, seemingly arbitrarily. LLMs, to my understanding, are Markovian prompt-to-prompt, so I don't see why this is an issue at all.
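Right: since each completion is computed fresh from whatever messages you send, a fork is nothing more than replaying a prefix of the transcript. A sketch against a generic chat-completions-style API (the URL and payload shape are placeholders):

    // Sketch: the model is stateless between calls, so "forking" a chat is
    // just re-sending part of the message array with a new user turn.
    type Msg = { role: "system" | "user" | "assistant"; content: string };

    async function fork(history: Msg[], fromIndex: number, newUserMsg: string): Promise<Msg[]> {
      const branch: Msg[] = [...history.slice(0, fromIndex), { role: "user", content: newUserMsg }];
      const res = await fetch("https://example.com/v1/chat/completions", { // placeholder endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "some-model", messages: branch }),
      });
      const data = await res.json();
      return [...branch, { role: "assistant", content: data.choices[0].message.content }];
    }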
The lack of visible timestamps feels small, but it actually creates a subtle fidelity problem. Conversations imply continuity that may not exist. Minutes, hours, or days collapse into the same narrative flow.
When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.
It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.
Claude's web interface has an elegant solution. When you roll the mouse over one of your prompts, it has the abbreviated date in the row of Retry/Edit/Copy icons, e.g. "Dec 17". Then if you roll the mouse over that date, you get the full date and time, e.g. "Dec 17, 2025, 10:26 AM".
This keeps the UI clean, but makes it easy to get the timestamp when you want it.
Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:
Dec 17, 2025, 10:26 AM [I added this here]
Copy Message
Select Text
Edit
ChatGPT could simply do the same thing for both web and mobile.
Just a note to those adding the time to the personalization response: it's inaccurate. If you have an existing chat, the time is near the last time you had that chat session active. If you open a new one, it can be off by plus or minus 15 minutes for some reason.
I was using a continuous conversation with ChatGPT to keep track of my lifts, and then I realized it never understands what day I'm talking to it. There's no consistency; it might as well be the date of the first message you sent.
I think that’s exactly why they’re not including timestamps. If timestamps are shown in the UI users might expect some form of “time awareness” which it doesn’t quite have. Yes you can add it to the context but I imagine that might degrade other metrics.
Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that’s bad for content management.
I ask it to continuously tell me when I break personal records and what muscle groups I've been focusing on in the last day (and what exercises I should probably do next). It doesn't work super well at any of these except tracking PRs.
It’s an incredible tool for weightlifting. I use it all the time to analyze my workout logs that I copy/paste from Apple Notes.
Example prompts:
- “Modify my Push #2 routine to avoid aggravating my rotator cuff”
- “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”
- “Are my legs hamstring or glute dominant? How should I adjust training”
- “Critique my training program and suggest optimizations”
That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.
That's brilliant. I've had an injury for a while now, and I change my routine on the fly at the gym, depending on whether I still feel pain or not. It would be much better to change it before the next time I go, so I don't waste time figuring out what to replace.
My biggest complaint about ChatGPT is how slow their interface gets when conversations grow long. This is surprising to me given that it's just rendering chats.
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.
The only (silly) reason I can think of is that a non-trivial number of people copy-paste directly from ChatGPT responses, and having the timestamp there would be annoying.
I built a single-page website that copies the current time to my clipboard, and I paste it into my messages. It's inconvenient and I don't do it regularly.
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
You only need that info if you know you need it in your RAG. Over the last two years of usage I don't recall where I'd have needed those timestamps, but I know there are cases. Still, this would have to be an option, because otherwise it would be a waste of tokens. However, we have to consider that they are competing on the quality AND length of the response, even when a shorter response is better. There's a pretzel of considerations when talking about this.
Imagine you started having back pain months ago and you remember asking ChatGPT questions when it first started.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let's not overcomplicate things. There aren't that many considerations. It's just a date. It doesn't need to be stuffed into the context of the chat. Not sure why the quality or length of the chat would need to be affected?
Beyond the lack of timestamps, ChatGPT produces oddly formatted text when you copy answers. It’s neither proper markdown nor rich text. The formatting is consistently off: excessive newlines between paragraphs, strangely indented lists, and no markdown support whatsoever.
I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
It's ugly. Why it isn't at least exposed as an option for power users makes me wonder whether timestamps would give some advantage to an inference scraper, or whether their service APIs just don't have contemporaneous access to the metadata available from the web interface.
Just like on a piece of hardware that doesn't have an RTC, we rely on NTP. Maybe we just need an NTP MCP for the agents. It looks like there are several open-source projects already, but I'm not linking to them because I don't know their quality or trustworthiness.
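A sketch of what such a tool could look like with the TypeScript MCP SDK; treat the exact SDK calls as assumptions from memory, and note it just reads the host clock rather than actually querying NTP:

    // Sketch of a minimal "current time" MCP server. SDK calls are from
    // memory and may not match the current API exactly; a real NTP MCP
    // would query an NTP server instead of trusting the host clock.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

    const server = new McpServer({ name: "time", version: "0.1.0" });

    server.tool("current_time", {}, async () => ({
      content: [{ type: "text" as const, text: new Date().toISOString() }],
    }));

    await server.connect(new StdioServerTransport());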
Other than the potential liability, cost may also be a factor.
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
I think "thank you" are used for inference in follow-up messages, but not necessarily timestamps.
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
Altman was being dumb; being polite to LLMs makes them produce higher quality results which results in less back-and-forth, saving money in the long run.
They must have a small team for the UI, and probably don't consider it part of their goals for long-term profitability? UI enhancements like this are surprisingly slow for a company with this much funding.
Time stamps? lol
They still don’t have the option to search your previous history.
Luckily I built an extension that stores all chats locally in a database, so I can reference and view them offline if I want to. Timestamps included.
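The storage half of an extension like that is genuinely small. A sketch using IndexedDB (database and field names are illustrative, not from any particular extension):

    // Sketch: persist chats locally from an extension, timestamps included.
    function openChatDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("chat-archive", 1);
        req.onupgradeneeded = () => {
          const store = req.result.createObjectStore("messages", { keyPath: "id" });
          store.createIndex("byTime", "createTime"); // lets you query by timestamp later
        };
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    async function saveMessage(db: IDBDatabase, id: string, text: string, createTime: number) {
      const tx = db.transaction("messages", "readwrite");
      tx.objectStore("messages").put({ id, text, createTime });
      return new Promise<void>((resolve, reject) => {
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }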
What annoys me even more is that ChatGPT doesn't alert you when you near the context window limit. I have a chat which I've worked on for a year and have now hit the limit. I worked around this by doing a GDPR download of all messages, reconstructing the conversation inside a markdown file, and then giving that file to Claude to create a summarized/compacted version of the chat...
The html file is just a big JSON with some JS rendering, so I wrote a bash script that adds the timestamp before each conversation title.
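A rough equivalent as a Node/TypeScript sketch, working from the conversations.json that ships in the same export (each conversation has a `title` and a unix-epoch `create_time`); the bash original's details may differ:

    // Sketch: prefix every conversation title with its creation date,
    // assuming the export's conversations.json shape (title, create_time).
    import { readFileSync, writeFileSync } from "node:fs";

    const conversations = JSON.parse(readFileSync("conversations.json", "utf8"));
    for (const conv of conversations) {
      const date = new Date(conv.create_time * 1000).toISOString().slice(0, 10);
      conv.title = `${date} ${conv.title}`;
    }
    writeFileSync("conversations.dated.json", JSON.stringify(conversations, null, 2));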
The history-search project mentioned at the top of the thread: https://github.com/gnyman/llm-history-search
Check out this project I've been working on, which lets you do the same in your browser, everything client-side.
https://github.com/TomzxCode/llm-conversations-viewer
Curious to hear your experience trying it!
Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>
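From the Dev Tools console on an open chat you can dump the per-message times straight out of that response. A sketch; the `mapping` / `message.create_time` field names are assumptions based on what the endpoint has returned when I've looked:

    // Sketch: list every message timestamp in a conversation.
    // Field names (mapping, message.create_time) are assumptions.
    async function dumpTimestamps(conversationId: string, accessToken: string) {
      const res = await fetch(`https://chatgpt.com/backend-api/conversation/${conversationId}`, {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      const conv = await res.json();
      for (const node of Object.values<any>(conv.mapping ?? {})) {
        if (node.message?.create_time) {
          console.log(new Date(node.message.create_time * 1000).toISOString(),
                      node.message.author?.role);
        }
      }
    }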
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
It's the Apple story all over again.
https://lawsofux.com/cognitive-load/
Yeah, we know. This is why there are defaults and only defaults.
Hogwash.
or UX doesn’t exist?
You have to mouse over for any detail.
What does this even mean?
I think we need to give people slightly more credit. If this is true, maybe it's because we keep infantilising them?
I genuinely can't tell if this is sarcasm or not.
An adverse reaction to equations, OK. Numbers themselves, I really don't know what you're talking about.
I can imagine a legal one. If the LLM messes up big time [1], timestamps could help build the case against it and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
It's irresponsible for OpenAI to let this issue be solved by extensions.
Also, they're easy to write for simple fixes, rather than having to find, vet, and then install a regular extension that brings 600 lbs of other stuff.
Don't install from the web store. Those ones can auto-update.
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
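For reference, the core of such a script is small enough to audit in a minute. A sketch that watches the conversation JSON the page already fetches and pins each message's time onto its element; the `data-message-id` selector and field names are assumptions about ChatGPT's current markup and API:

    // Userscript sketch: intercept the conversation JSON, then attach each
    // timestamp to its message node as a tooltip. In practice you'd re-run
    // the DOM pass after render, since fetch can resolve before paint.
    const origFetch = window.fetch;
    window.fetch = async (...args: Parameters<typeof fetch>) => {
      const res = await origFetch(...args);
      if (String(args[0]).includes("/backend-api/conversation/")) {
        const conv = await res.clone().json();
        for (const node of Object.values<any>(conv.mapping ?? {})) {
          const msg = node.message;
          if (!msg?.create_time) continue;
          document.querySelector(`[data-message-id="${msg.id}"]`)
            ?.setAttribute("title", new Date(msg.create_time * 1000).toLocaleString());
        }
      }
      return res;
    };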
Gemini btw too.
https://twitter.com/OpenAI/status/1963697012014215181?lang=e...
Just edit a message and it’s a new branch.
I don't see them on their mobile app though.
- Cardio goals, current FTP, days to train, injuries to avoid
- 3 lift-day programs with tracking, 8 weeks progressive; loop my PT into warm-ups
- Alternate suggestions
- Use the whole sheet to get an overview of how the last 8 weeks went, then change things up
I'm not suggesting this is sufficient, I'm just noting there is somewhere in the user interface where it is displayed.
The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.
I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
if response == 'thank you': print("you're welcome")
It just isn't even close at this point for my uses across multiple domains.
It even makes me sad, because I would much rather use ChatGPT than Google, but if you plotted my use of ChatGPT it is not looking good.
As the companies sprint towards AGI as the goal the floor for acceptable customer service has never been lower. These two concepts are not unrelated.
Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.
Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.