Sciencemadness Discussion Board
Author: Subject: Proposed posting convention for dealing with AI
j_sum1
Administrator
********




Posts: 6334
Registered: 4-10-2014
Location: At home
Member Is Offline

Mood: Most of the ducks are in a row

[*] posted on 13-7-2023 at 00:22
Proposed posting convention for dealing with AI


We have had a few threads recently on the topic of ChatGPT and other AIs and their unsuitability for informing users on chemical synthesis. The topic also came up in our newly-revived comment box thread, as well as in a thread on nitroethane. (Link to show suggested format.)

It may occasionally be useful to cite an AI procedure for the purposes of critical analysis, mockery, or discussion on the societal implications of such things. I believe we need an established convention for this practice in order to differentiate between AI generated text and legitimate text and to highlight its dubious nature.

I propose the following:
  • AI text is put in a quote box
  • AI text is prefaced with a heading such as "Quote from ChatGPT"
  • AI text is coloured orange or brown using the [ color= ] tag
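    For illustration, a post following this convention might be marked up like this (assuming the forum's standard BBCode [quote], [b], and [color] tags; the exact colour value is just an example):

```
[quote]
[b]Quote from ChatGPT:[/b]
[color=#CC6600]...the AI-generated text, pasted verbatim...[/color]
[/quote]
```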

    All of this would be spelled out in the FAQ, and regular users would adopt the practice.

    I put this suggestion out there for open discussion among members and mods. If there is general agreement, or agreed modifications to the idea, we can take it on board.

    What do you all think?

    [Edit to add link]

    [Edited on 13-7-2023 by j_sum1]
    woelen
    Super Administrator
    *********




    Posts: 8027
    Registered: 20-8-2005
    Location: Netherlands
    Member Is Offline

    Mood: interested

    [*] posted on 13-7-2023 at 00:57


    I agree with j_sum1. Making it explicit that the text is a copy of AI-generated text in a clearly visible and uniform way, across the board, would be a very good thing. J_sum1's proposal to use a quote-box and a reddish/brown color for the text inside the box is fine.



    The art of wondering makes life worth living...
    Want to wonder? Look at https://woelen.homescience.net
    B(a)P
    International Hazard
    *****




    Posts: 1139
    Registered: 29-9-2019
    Member Is Offline

    Mood: Festive

    [*] posted on 13-7-2023 at 03:20


    Totally agree with this approach!

    Quote:
    Quote from Bard
    Chatbots are bad at science for a number of reasons, including:

    They lack the ability to understand complex scientific concepts. Chatbots are trained on large datasets of text, but this text is typically not scientific in nature. This means that chatbots may not have the necessary understanding of scientific concepts to answer your questions accurately.
    They are not able to critically evaluate scientific information. Chatbots are not able to understand the nuances of scientific research, and they may not be able to distinguish between good and bad science. This means that they may provide you with incorrect or misleading information.
    They are not able to keep up with the latest scientific research. Scientific research is constantly evolving, and chatbots are not able to keep up with the latest findings. This means that they may provide you with outdated or inaccurate information.
    In addition to these limitations, chatbots are also susceptible to bias. This is because they are trained on data that is created by humans, and this data may reflect the biases of the people who created it. This means that chatbots may provide you with information that is biased or inaccurate.

    For these reasons, it is important to be critical of the information that chatbots provide you with. If you are looking for accurate and reliable scientific information, it is best to consult with a human expert.

    Here are some additional tips for evaluating the scientific information that a chatbot provides you with:

    Ask the chatbot to provide you with the sources of its information. This will allow you to verify the accuracy of the information.
    Be aware of the chatbot's limitations. Chatbots are not able to understand complex scientific concepts, and they may not be able to keep up with the latest scientific research.
    Consult with a human expert if you need more information. A human expert will be able to help you evaluate the scientific information that the chatbot provides you with.


    :D
    arkoma
    Redneck Overlord
    *******




    Posts: 1763
    Registered: 3-2-2014
    Location: On a Big Blue Marble hurtling through space
    Member Is Offline

    Mood: украї́нська

    [*] posted on 13-7-2023 at 08:56


    Personally, I think there is some danger in allowing this crap to be "forum wide". It could be mistaken for legit info, especially by non-member casual users. I am quite proud to be affiliated with an organization of folks that consistently has "first page" hits when googling chem questions (how I found scimad some years ago), and if it's in a scrapable subforum, this crap could end up in those results.

    my $.02




    "We believe the knowledge and cultural heritage of mankind should be accessible to all people around the world, regardless of their wealth, social status, nationality, citizenship, etc" z-lib

    Texium
    Administrator
    ********




    Posts: 4618
    Registered: 11-1-2014
    Location: Salt Lake City
    Member Is Offline

    Mood: PhD candidate!

    [*] posted on 13-7-2023 at 09:14


    I agree with you, arkoma. I stand by my position that any AI-generated procedures not posted in Whimsy for the purpose of ridicule should be moved either to Whimsy (to be ridiculed) or to Detritus.



    Come check out the Official Sciencemadness Wiki
    They're not really active right now, but here's my YouTube channel and my blog.
    fusso
    International Hazard
    *****




    Posts: 1922
    Registered: 23-6-2017
    Location: 4 ∥ universes ahead of you
    Member Is Offline


    [*] posted on 13-7-2023 at 10:19


    Or start a new subforum dedicated to roasting AI answers? Just mentioning it; it may not get enough posts to be worth opening.
    j_sum1
    Administrator
    ********




    Posts: 6334
    Registered: 4-10-2014
    Location: At home
    Member Is Offline

    Mood: Most of the ducks are in a row

    [*] posted on 13-7-2023 at 14:36


    Tex and arkoma, I don't share your concerns. Or maybe I misunderstand them. The way I see it, there are two possible drawbacks to leaving AI text out in the open.

    1. It is discoverable by bots which AI use to harvest information for their own learning. This obviously is circular and cannot lead to improvements in AI. It quite possibly contributes to the repetitive dribble we see in AI text. It is possible that repetitive dribble generated from AI is the result of self-referral. Self-referential information could possibly lead to repetitive dribble.
    My thoughts are that this is not our concern. Nor is this site likely to have a measurable effect in the grand scheme of things.

    2. It is mistaken for reliable information by the ignorant and uninitiated.
    To address this, I suggest that it is counterproductive to hide AI text. It is better to have it out in the open and easily identifiable for what it is. That way we are clearly showing the limitations of the technology. This will have the effect of educating people in a way that detritus-izing won't. And, on the off-chance that AI improves in this context, we don't have to change policy, and we leave a footprint of how the tech has developed -- which again helps people to assess its validity and usefulness.


    [edit]
    It would be good to have some more input / opinions of members on this matter.

    [Edited on 13-7-2023 by j_sum1]
    Admagistr
    Hazard to Others
    ***




    Posts: 373
    Registered: 4-11-2021
    Location: Central Europe
    Member Is Offline

    Mood: The dreaming alchemist

    [*] posted on 13-7-2023 at 14:56


    AI provides a very dangerous kind of information, because it mixes truth and scientific facts with nonsense. If it improves but is still not perfect, it will be even more dangerous: then it will be hard to tell that it is wrong or has made something up. If someone uses its advice, for example in the synthesis of energetic materials or in a procedure with dangerous reactants, it can turn out very badly. That is why contributions from AI should be hidden, or preferably moved to the trash.
    B(a)P
    International Hazard
    *****




    Posts: 1139
    Registered: 29-9-2019
    Member Is Offline

    Mood: Festive

    [*] posted on 13-7-2023 at 15:33


    I don't think it warrants a new sub-forum.
    I also don't think that it all needs to go to detritus.
    There are plenty of inaccurate or incorrect statements on this forum that have not been sent to detritus.
    If it is clearly quoted and posted in the agreed format, to me that is enough.
    In saying all of this, if someone opened a thread stating they have a new novel method to prepare x compound, copied straight from an AI, then it belongs in detritus. I don't think this is any different from someone making a baseless claim without context.
    Bedlasky
    International Hazard
    *****




    Posts: 1242
    Registered: 15-4-2019
    Location: Period 5, group 6
    Member Is Offline

    Mood: Volatile

    [*] posted on 13-7-2023 at 15:42


    I agree with j_sum1 that if anyone posts something generated by AI, it must be clear that the text was generated by AI. But I share some concerns with Texium and arkoma. Honestly, you cannot use AI for serious discussion; AI texts don't make any sense. If someone, for some GOOD reason, decides to post something AI-generated, it should be highlighted as j_sum suggested.
    Texium
    Administrator
    ********




    Posts: 4618
    Registered: 11-1-2014
    Location: Salt Lake City
    Member Is Offline

    Mood: PhD candidate!

    [*] posted on 13-7-2023 at 16:36


    Ok, maybe it doesn’t need to be nearly as heavy-handed as I described. My feeling is similar to what Bedlasky said. AI text can’t be taken seriously. It often contains so many errors that the time it takes to fact-check, debunk, and thoroughly explain why it is wrong is just not worth it. It invites entire new layers of spoonfeeding, because now, instead of merely asking for the answer to one question, people will ask the AI, which opens up so many more questions, since the answer will inevitably have multiple inconsistencies that need to be pointed out. Like responding to an Yttrium2 thread. So the options are (1) to explain those inconsistencies (effectively, spoonfeeding) or (2) to throw it in the trash where it belongs.

    So I suppose if someone is willing to put in some effort and post an AI procedure in which they have already flagged the obviously problematic things (e.g. “recrystallize your ethanol”) rather than just copy and paste with a general “is this right?” question, it could be acceptable. Likewise, if someone attempted to actually follow an AI procedure, it would make sense to post it alongside their results, clearly noting what nonsensical parts they changed or chose to ignore. Anything less than that should be treated as the worst form of spoonfeeding request.




    Come check out the Official Sciencemadness Wiki
    They're not really active right now, but here's my YouTube channel and my blog.
    arkoma
    Redneck Overlord
    *******




    Posts: 1763
    Registered: 3-2-2014
    Location: On a Big Blue Marble hurtling through space
    Member Is Offline

    Mood: украї́нська

    [*] posted on 13-7-2023 at 19:33


    Quote: Originally posted by j_sum1  


    1. It is discoverable by bots which AI use to harvest information for their own learning. This obviously is circular and cannot lead to improvements in AI. It quite possibly contributes to the repetitive dribble we see in AI text. It is possible that repetitive dribble generated from AI is the result of self-referral. Self-referential information could possibly lead to repetitive dribble.
    My thoughts are that this is not our concern. Nor is this site likely to have a measurable effect in the grand scheme of things.


    Exactly one of my concerns, if it's scrapeable here, and I think it kinda does merit our concern. We're stuck with AI, for bad or good; let's at least not help it get GarbageInWorseGarbageOut. It's sort of analogous to an inquisitive, extroverted child with no way to do real-world experimentation who wants to show off and disseminate its knowledge; it needs as good a bunch of data to hoover up as we humans can give it.

    my other $.02




    "We believe the knowledge and cultural heritage of mankind should be accessible to all people around the world, regardless of their wealth, social status, nationality, citizenship, etc" z-lib

    j_sum1
    Administrator
    ********




    Posts: 6334
    Registered: 4-10-2014
    Location: At home
    Member Is Offline

    Mood: Most of the ducks are in a row

    [*] posted on 13-7-2023 at 20:39


    You're up to four cents now arkoma. Don't blow the budget. :P
    Tsjerk
    International Hazard
    *****




    Posts: 3032
    Registered: 20-4-2005
    Location: Netherlands
    Member Is Offline

    Mood: Mood

    [*] posted on 13-7-2023 at 23:47


    Sending all AI stuff to detritus sounds a bit harsh; that would mean no discussion would be possible about actual texts generated by AI. I think the solution suggested in the starting post is a fair middle way.

    [Edited on 14-7-2023 by Tsjerk]
    SplendidAcylation
    Hazard to Others
    ***




    Posts: 203
    Registered: 28-10-2018
    Location: Starving in some deep mystery
    Member Is Offline

    Mood: No one I think is in my tree.

    [*] posted on 15-7-2023 at 01:32


    Well, I tend to agree with Arkoma on this one, although I think j_sum1's proposal is quite reasonable.

    Personally, I hate these newfangled chatbots. I have never used them and don't intend to, but who knows how long I can avoid them before they are widespread and unavoidable.

    I hate them because I am already sick of people I know talking about them endlessly, quoting poetry written by them, etc. I just have a bad feeling about the whole thing.

    That said, I cannot back up my emotions with a clearly rationalized argument, so as I said, the middle ground seems quite sufficient.

    I would also say that the new rules should continue to apply if/when chat-bots become a lot smarter than us, and not simply as a means to counter misinformation.

    arkoma
    Redneck Overlord
    *******




    Posts: 1763
    Registered: 3-2-2014
    Location: On a Big Blue Marble hurtling through space
    Member Is Offline

    Mood: украї́нська

    [*] posted on 15-7-2023 at 20:48


    Quote: Originally posted by j_sum1  
    You're up to four cents now arkoma. Don't blow the budget. :P


    I have empty beer cans I can cash in (been known to neck a few tinnies) so I think it'll be ok!




    "We believe the knowledge and cultural heritage of mankind should be accessible to all people around the world, regardless of their wealth, social status, nationality, citizenship, etc" z-lib

