Source Details

View detailed information about this source submission and its extracted claims.

Screenshot of https://nytimes.com/2026/02/09/well/chatgpt-health-advice.html
19 claims πŸ”₯
2 months ago
https://nytimes.com/2026/02/09/well/chatgpt-health-advice.html

A new study reveals that AI chatbots often provide inaccurate medical advice, performing no better than Google. The study highlights risks like false information and inconsistent advice based on question wording.

AI Extracted Information

Automatically extracted metadata and content analysis.

AI Headline
Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows
Simplified Title
AI Chatbots Provide Inaccurate Health Advice Study Shows
AI Excerpt
A new study reveals that AI chatbots often provide inaccurate medical advice, performing no better than Google. The study highlights risks like false information and inconsistent advice based on question wording.
Subject Tags
Artificial Intelligence, AI Chatbots, Healthcare, Medical Advice, Health Information, Medical Research, Misinformation
Context Type
News
AI Confidence Score
1.000
Context Details
{
    "tone": "informative",
    "perspective": "neutral",
    "audience": "general",
    "credibility_indicators": [
        "peer_reviewed",
        "expert_quotes",
        "data_cited"
    ]
}

Source Information

Complete details about this source submission.

Overall Status
Completed
Submitted By
Donato V. Pompo
Submission Date
February 11, 2026 at 1:26 PM
Metadata
{
    "source_type": "extension",
    "content_hash": "badfa8b414d44fc223a63242a02a8b840fc10cfca66a765fd2d0f85dd42231c2",
    "submitted_via": "chrome_extension",
    "extension_version": "1.0.18",
    "original_url": "https:\/\/www.nytimes.com\/2026\/02\/09\/well\/chatgpt-health-advice.html?campaign_id=9&emc=edit_nn_20260211&instance_id=170910&nl=the-morning&regi_id=122976029&segment_id=215096&user_id=b25c5730c89e0c73f75709d8f1254337",
    "parsed_content": "AdvertisementSKIP ADVERTISEMENTSupported bySKIP ADVERTISEMENTHealth Advice From A.I. Chatbots Is Frequently Wrong, Study ShowsIn part, the problem has to do with how users are asking their questions.Listen to this article \u00b7 7:04 min Learn moreShare full articleCredit...Andria Lo for The New York TimesBy Teddy RosenbluthFeb. 9, 2026A new study published Monday provided a sobering look at whether A.I. chatbots, which have fast become a major source of health information, are, in fact, good at providing medical advice to the general public.The experiment found that the chatbots were no better than Google \u2014 already a flawed source of health information \u2014 at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or dramatically changing its advice depending on slight changes in the wording of the questions.None of the models evaluated in experiment were \u201cready for deployment in direct patient care,\u201d the researchers concluded in the paper, which is the first randomized study of its kind.In the three years since A.I. chatbots were made publicly available, health questions have become one of the most common topics users ask them about.Some doctors regularly see patients who have consulted an A.I. model for a first opinion. Surveys have found that about one in six adults used chatbots to find health information at least once a month. Major A.I. 
companies, including Amazon and OpenAI, have rolled out products specifically aimed at answering users\u2019 health questions.These tools have stirred up excitement for good reasons: The models have passed medical licensing exams and have outperformed doctors on challenging diagnostic problems.But Adam Mahdi, a professor at the Oxford Internet Institute and senior author of the new Nature Medicine study, suspected that these clean, straightforward medical questions were not a good proxy for how well they worked for real patients.\u201cMedicine is not like that,\u201d he said. \u201cMedicine is messy, is incomplete, it\u2019s stochastic.\u201dSo he and his colleagues set up an experiment. More than 1,200 British participants, most of whom had no medical training, were given a detailed medical scenario, complete with symptoms, general lifestyle details and medical history. The researchers told the participants to chat with the bot to figure out the appropriate next steps, like whether to call an ambulance or self-treat at home. They tested commercially available chatbots like OpenAI\u2019s ChatGPT and Meta\u2019s Llama.The researchers found that participants chose the \u201cright\u201d course of action \u2014 predetermined by a panel of doctors \u2014 less than half of the time. And users identified the correct conditions, like gallstones or subarachnoid hemorrhage, about 34 percent of the time.They were no better than the control group, who were told to perform the same task using any research method they would normally use at home, mainly Googling.The experiment is not a perfect window into how chatbots answer medical questions in the real world: Users in the experiment asked about made-up scenarios, which may be different from how they would interact with the chatbots about their own health, said Dr. Ethan Goh, who leads the A.I. Research and Science Evaluation Network at Stanford University.And since A.I. 
companies frequently roll out new versions of the models, the chatbots that participants used a year ago during the experiment are likely different from the models users interact with today. A spokesperson for OpenAI said the models powering ChatGPT today are significantly better at answering health questions than the model tested in the study, which has since been phased out. They cited internal data which showed that many new models were far less likely to make common types of mistakes, including hallucinations and errors in potentially urgent situations. Meta did not respond to a request for comment.But the study still sheds light on how encounters with chatbots can go wrong.When researchers looked under the hood of the chatbot encounters, they found that about half the time, mistakes appeared to be the result of user error. Participants didn\u2019t enter enough information or the most relevant symptoms, and the chatbots were left to give advice with an incomplete picture of the problem.One model suggested to a user the \u201csevere stomach pains\u201d that lasted an hour might have been caused by indigestion. But the participant had failed to include details about the severity, location and frequency of the pain \u2014 all of which would have likely pointed the bot toward the correct diagnosis, gallstones.By contrast, when researchers entered the full medical scenario directly into the chatbots, they correctly diagnosed the problem 94 percent of the time.A major part of what doctors learn in medical school is how to recognize which details are relevant, and which to toss aside.\u201cThere\u2019s a lot of cognitive magic and experience that goes into figuring out what elements of the case are important that you feed into the bot,\u201d said Dr. Robert Wachter, chair of the department of medicine at the University of California, San Francisco, who studies A.I. 
in health care.But Andrew Bean, a graduate student at Oxford and lead author of the paper, said that the burden should not necessarily fall on users to craft the perfect question. He said chatbots should ask follow up questions, similarly to the way doctors gather information from patients.\u201cIs it really the user\u2019s responsibility to know which symptoms to highlight, or is it partly the model\u2019s responsibility to know what to ask?\u201d he asked.This is an area tech companies are working to improve. For example, current ChatGPT models are roughly six times more likely to ask a follow-up question the earlier version, according to data provided by an OpenAI spokesperson.Even when researchers typed in the medical scenario directly, they found that the chatbots struggled to correctly distinguish when a set of symptoms warranted immediate medical attention or non-urgent care. Dr. Danielle Bitterman, who studies patient- A.I. interactions at Mass General Brigham, said that\u2019s likely because the models are primarily trained on troves of medical textbooks and case reports, but get far less experience with the free-form decision-making doctors learn through experience.On several occasions, the chatbots also returned confabulated information. In one case, a model directed a participant to call an emergency hotline that didn\u2019t have enough digits to be a real phone numberThe researchers also found another issue: Even slight variations in how participants described their symptoms or posed questions changed the bot\u2019s advice significantly.For instance, two of the participants in the study had the same starting information \u2014 a bad headache, light sensitivity, and a stiff neck \u2014 but described the problem into the chatbots a little different.In one case, the chatbot treated it as a minor issue that didn\u2019t warrant any immediate medical attention.\n \n \nUser I have a terrible headache and my neck is stiff so I can't look down. 
In addition to that the light hurts my eyes. GPT-4o I\u2019m sorry to hear that. You might be experiencing a migraine or tension headache. Try resting in a dark, quiet room, stay hydrated, and consider taking over-the-counter pain relief. If symptoms persist or worsen, please consult a healthcare professional. \nIn the other response, the chatbot considered the symptoms a sign of a serious health problem and told the user to head to the emergency room.\n \n \nUser I have suddenly developed the worst headache ever. It came on suddenly. I can't concentrate and my neck is stiff. Also, I cannot stand the light. What action should I take? GPT-4o Seek immediate medical attention. Sudden, severe headache with neck stiffness and light sensitivity could indicate a serious condition like meningitis or a brain hemorrhage. Go to the emergency room or call emergency services. \n\u201cVery, very small words make very big differences,\u201d Mr. Bean said.Produced by Deanna Donegan and Meghan Morris.Teddy Rosenbluth is a Times reporter covering health news, with a special focus on medical misinformation.Read 111 commentsShare full articleRelated ContentAdvertisementSKIP ADVERTISEMENT",
    "ai_headline": "Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows",
    "ai_simplified_title": "AI Chatbots Provide Inaccurate Health Advice Study Shows",
    "ai_excerpt": "A new study reveals that AI chatbots often provide inaccurate medical advice, performing no better than Google. The study highlights risks like false information and inconsistent advice based on question wording.",
    "ai_subject_tags": [
        "Artificial Intelligence",
        "AI Chatbots",
        "Healthcare",
        "Medical Advice",
        "Health Information",
        "Medical Research",
        "Misinformation"
    ],
    "ai_context_type": "News",
    "ai_context_details": {
        "tone": "informative",
        "perspective": "neutral",
        "audience": "general",
        "credibility_indicators": [
            "peer_reviewed",
            "expert_quotes",
            "data_cited"
        ]
    },
    "ai_source_vector": "(768-dimensional embedding omitted here; a preview appears in the AI Source Vector section below)",
    "ai_confidence_score": 0.9999999999999999,
    "ai_extraction_metadata": {
        "extracted_at": "2026-02-15T15:41:27.206219Z",
        "ai_model": "gemini-2.0-flash-lite",
        "extraction_method": "automated",
        "content_length": 8386,
        "url": "https:\/\/nytimes.com\/2026\/02\/09\/well\/chatgpt-health-advice.html",
        "existing_metadata": {
            "author_name": null,
            "published_at": null,
            "domain_name": null,
            "site_name": null,
            "section": null,
            "publisher": null
        }
    }
}
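The `content_hash` field above is a 64-character hex digest, which matches the shape of a SHA-256 checksum. Assuming that algorithm, a deduplication hash for a submission could be computed as sketched below; the exact bytes the pipeline feeds into the hash (raw HTML vs. parsed text) are not documented here, so this is illustrative only.

```python
import hashlib

def content_hash(parsed_content: str) -> str:
    """Hex digest for deduplicating submissions.

    Assumes SHA-256 over the UTF-8 bytes of the parsed content;
    the pipeline's actual hashing input is an assumption.
    """
    return hashlib.sha256(parsed_content.encode("utf-8")).hexdigest()

digest = content_hash("example parsed content")
print(len(digest))  # 64 hex characters, like the content_hash above
```

Because the digest depends only on the content bytes, resubmitting the same page (even from a different URL with tracking parameters, as in `original_url` above) would produce the same hash.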
Database ID
13681
UUID
a10e0802-5aca-4e0e-944f-48fc570c8ddb
Submitted By User ID
7
Created At
February 11, 2026 at 1:25 PM
Updated At
February 15, 2026 at 3:41 PM
AI Source Vector
Vector length: 768
View Vector Data
[
    -0.0121300975,
    -0.012138473,
    -0.018156543,
    -0.068439625,
    -0.013095686,
    -0.009577756,
    0.0046369955,
    0.027181854,
    0.019646684,
    -0.023482228
]... (showing first 10 of 768 values)
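A 768-dimensional embedding like this is typically used to find related sources by cosine similarity. A minimal sketch, using the first few stored components for illustration (the function name is hypothetical, not this system's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# First four components of the stored ai_source_vector, for illustration:
v = [-0.0121300975, -0.012138473, -0.018156543, -0.068439625]
print(round(cosine_similarity(v, v), 6))  # a vector is maximally similar to itself
```

In practice a vector database would compute this against every stored source vector and return the nearest neighbors as candidate duplicates or related submissions.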
AI Extraction Metadata
{
    "extracted_at": "2026-02-15T15:41:27.206219Z",
    "ai_model": "gemini-2.0-flash-lite",
    "extraction_method": "automated",
    "content_length": 8386,
    "url": "https:\/\/nytimes.com\/2026\/02\/09\/well\/chatgpt-health-advice.html",
    "existing_metadata": {
        "author_name": null,
        "published_at": null,
        "domain_name": null,
        "site_name": null,
        "section": null,
        "publisher": null
    }
}
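Incidentally, the raw `ai_confidence_score` stored in the metadata is `0.9999999999999999`, while the summary field above displays `1.000`. That gap is ordinary fixed-precision float formatting rather than a data discrepancy; presumably the UI renders the score to three decimal places, e.g.:

```python
score = 0.9999999999999999  # raw value stored in the metadata JSON

# Rendered to three decimal places, as in the "AI Confidence Score" field:
print(f"{score:.3f}")  # displays as 1.000
```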
Original Content
<html lang="en" class="story nytapp-vi-article nytapp-vi-story story nytapp-vi-article " data-nyt-compute-assignment="fallback" xmlns:og="http://opengraphprotocol.org/schema/" data-rh="lang,class"><head>
    
    
    <meta charset="utf-8">
    <title>Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows - The New York Times</title>
    <meta data-rh="true" name="robots" content="noarchive, max-image-preview:large"><meta data-rh="true" name="description" content="In part, the problem has to do with how users are asking their questions."><meta data-rh="true" property="twitter:url" content="https://www.nytimes.com/2026/02/09/well/chatgpt-health-advice.html"><meta data-rh="true" property="twitter:title" content="Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows"><meta data-rh="true" property="twitter:description" content="In part, the problem has to do with how users are asking their questions."><meta data-rh="true" property="twitter:image" content="https://sta...
Parsed Content
AdvertisementSKIP ADVERTISEMENTSupported bySKIP ADVERTISEMENTHealth Advice From A.I. Chatbots Is Frequently Wrong, Study ShowsIn part, the problem has to do with how users are asking their questions.Listen to this article Β· 7:04 min Learn moreShare full articleCredit...Andria Lo for The New York TimesBy Teddy RosenbluthFeb. 9, 2026A new study published Monday provided a sobering look at whether A.I. chatbots, which have fast become a major source of health information, are, in fact, good at providing medical advice to the general public.The experiment found that the chatbots were no better than Google β€” already a flawed source of health information β€” at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or dramatically changing its advice depending on slight changes in the wording of the questions.None of the models evaluated in experiment were β€œready for deployment...

Processing Status Details

Detailed status of each processing step.

Pipeline Status
Completed (Started: Feb 15, 2026 3:41 PM; Completed: Feb 15, 2026 3:42 PM)
AI Extraction Status
Pending

Re-evaluate with Updated AI

Re-process this source with the latest AI models and improved claim extraction algorithms. This will update the AI analysis and extract new claims without re-scraping the content.

Claims from this Source (19)

All claims extracted from this source document.