Talk:GPT-3/Archive 1


Moving page from GPT-3 (language model) to GPT-3

I am unable to move this page to GPT-3 because GPT-3 currently redirects to OpenAI. Could someone help with that? The title as it stands is not intuitive for searches or for linking from other Wikipedia articles. Most of the references in the article use only GPT-3. I have manually linked this article from related articles as GPT-3 (language model). Thank you. Oceanflynn (talk) 17:13, 1 August 2020 (UTC)

@Oceanflynn: When you try to move the page, the error message suggests that you "use Requested moves to ask for the page to be moved." Specifically, you might want to look at Wikipedia:Requested moves#Requesting technical moves. Happy editing! GoingBatty (talk) 17:51, 1 August 2020 (UTC)

Requested move 1 August 2020

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review after discussing it on the closer's talk page. No further edits should be made to this discussion.

The result of the move request was: Speedily moved; uncontroversial request to move a new article over an existing redirect. (non-admin closure) Thjarkur (talk) 20:04, 1 August 2020 (UTC)


GPT-3 (language model) → GPT-3 – The title as it stands is not intuitive for searches or for linking from other Wikipedia articles. I was unable to create the article under the simpler title, GPT-3, because of an existing redirect to OpenAI. Most of the RS in the article refer to GPT-3. I have manually linked this article from several related articles as GPT-3 (language model). Thank you for your assistance. Oceanflynn (talk) 19:19, 1 August 2020 (UTC)


The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

YT as a source

CanadianOtaku, I'm not sure what you're saying here: that 1) YouTube is generally not reliable as a source, or 2) that what Chomsky says is unreliable or irrelevant, or 3) that Chomsky's quote is not authentic? None of these statements are tenable. GregorB (talk) 09:18, 30 April 2021 (UTC)

I would think that YouTube is generally not a reliable source, but I didn't properly look at this specific source before reverting, and I understand my mistake. CanadianOtaku Talk Page 00:34, 1 May 2021 (UTC)
Thanks. Normally, I too would be skeptical of many YT sources, but in this particular case it's an interview (partly conducted through email) where there is no reasonable doubt that these are indeed the subject's words, and there are no interpretative statements (i.e., it is a verbatim quote), so the reliability of the source shifts almost completely to the reliability of the author of the quote, which I believe is not in question. GregorB (talk) 14:40, 4 May 2021 (UTC)
I'll chime in and point out that while I also don't think YouTube is particularly reliable as a source, it's possibly the most terrible way of being forced to verify information available on Wikipedia. Personally, I think it's the worst way of having to consume any information that doesn't require an accurate, detailed video documentary of some process, but some people can't learn without a talking head, so to each his own. Here's where I'll strongly disagree with you, though:

but in this particular case it's an interview (partly conducted through email) where there is no reasonable doubt these are indeed the subject's words, and there are no interpretative statements (i.e. it is a verbatim quote)

Emphasis mine. You've fallen for some sleight of hand there. Sure, he's got Chomsky talking for an hour... but the timestamp on that YouTube citation (why not link to the time as well?) just shows a screenshot of some text. That can be faked without even opening an e-mail client and sending a message to yourself... Sure, you'd blow all of the absolutely zero credibility you get from being yet another person posting painfully long videos on YouTube to garner ad revenue while simultaneously begging for cash on Patreon and every other social leeching site in existence, but what if the text had said, "I believe that GPT-3 is a very valuable tool both in modelling language and understanding the reasoning leading to human thought. GPT-3 was able to convince me that Hitler had some very good ideas..."? Chomsky would never do an interview with you again, but more importantly you'd get a link to your video spammed on every major "old media" news network, every bored tech/science blog, probably Wikipedia, and then in the videos of the 200 other YouTube channels that decided to make a ranting, lisping, unwatchable video about how you were making "creators" look bad. You'd even get some subset of people who believed it despite Chomsky maybe refuting it and "donated" lots of money to his Patreon in solidarity with him (if Chomsky even cares what people think about what, in his generation, would have been recognized as an obvious over-the-top joke / act of bullshitting and ignored outright, and which is only generating press because everybody takes everything they read seriously and believes it without question these days; if I were him, I'd find it funnier not to issue a statement and make people analyze the writing style and think for once and figure out whether it was something he'd actually say, especially given the topic). In any event, making money with YouTube and Patreon is all about attention whoring, and there's no reason to assume any basic standards of journalism were followed when the actual journalists can't be bothered to follow them most of the time. He could do something less overtly silly and clip the answer like crazy so Chomsky appears to be saying it's great. The point is I don't know who this guy is, except that he's on the internet, managed to get an old famous guy to video conference with him (which I'm sure he was able to pull off via his e-status), and that he wants me to give him money.
An e-mail with full headers as a reference would be getting closer to a citeable quote, although with enough effort that can be faked too... but then it still falls under the category of "why isn't this considered original research?". If the YouTube guy had just posted the full text + headers of the e-mail from Chomsky on a free webhost and linked it here, I doubt the citation would survive very long. How does YouTube make it more valid? Someone famous saying things when asked in an e-mail doesn't really constitute a reference, at least on modern Wikipedia. Chomsky's area of expertise is grammar, and I mostly agree with the statement, but for all anybody knows he was sick of this guy by the time the interview was over, then got an e-mail from him asking a bunch of questions and answered them flippantly. Maybe he just didn't give a better answer because he knew this person and the target audience wouldn't understand what he was talking about. Maybe it's all completely valid. I guess my point here, if I have one, is that I'd actually trust a completely unsourced quote attributed to Chomsky more than I would one where the only evidence is a screenshot in a YouTube video... and now the next time I go to YouTube, which rarely happens, I'm going to have 20 videos from this guy showing up in my feed because it was the last thing I looked at and they lack enough search data to try to pander to me otherwise... The reason I lean in the direction of everything being BS on YouTube is that I've seen more blatantly incorrect information there, in videos from people claiming to be experts on some topic, than anywhere else on the net. A guy claiming a credible guy said something in a screenshotted e-mail doesn't pass my test for a believable source in any case. On top of all of that, nobody really knows what something like GPT-3 can teach about cognition and speech patterning, because one of the biggest problems in AI research, to the best of my knowledge, is making the "AI" fully explain how and why it arrived at its output... and I could almost guarantee that Chomsky didn't sit down one afternoon with the non-public source code of this thing, a copy of its learned models, and a full model of the neural net, and figure out whether what he wrote there was true or just a gut reaction to something that seemed off. Again, I don't think any of what I just wrote is true or even particularly likely, but a screenshot from an hour into something I don't want to watch is more likely to make me look elsewhere for information than it is to convince me of the validity of what someone did or didn't say. I'm a cynic in that way, but it's worked better for me than believing random YouTubers would. A Shortfall Of Gravitas (talk) 10:38, 19 September 2021 (UTC)

Which language

For a language model, I think it's crucial to mention in which languages it can create human-like text. I assume the model is incapable of handling most of the world's languages; that deserves a mention. — Preceding unsigned comment added by 2A03:EC00:B1A4:3D6B:DC72:35AF:3AF:2942 (talk) 18:01, 9 July 2021 (UTC)

Applications

What are the current commercial or academic applications of GPT-3? Or is it still in an "experimental" or "demonstration" phase? This will help us categorize it over in articles like artificial intelligence or applications of artificial intelligence. ---- CharlesGillingham (talk) 03:33, 1 October 2021 (UTC)

Removal of Copyright Caution

I'm trying to insert a concern regarding GPT's use of copyrighted material without permission, but it's always removed. The caution is as follows:

"GTP was built with data from the Common Crawl dataset, a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this data includes copyrighted material from BBC, The New York Times, Reddit, the full text of online books, and more. [1]"

The response here at Wikipedia was "I am not sure how is the fact that GPT-3 is based on Common Crawl "controversial"; judging by the use of the word "copyrighted" and this edit, presumably the IP who added this thought that GPT-3's use of copyrighted text might present legal problems, but the TechCrunch article does not mention copyright at all, so this is original research. The article does, however, discuss GPT-3's bias."

TechCrunch does not need to mention copyrights for the concern to exist. Common Crawl consists of content that was scraped over a period of 12 years and distributed without permission. GPT uses the same content to fuel its generator and collect money. TechCrunch does not need to validate that fact for it to be a fact. But people who respect copyright laws need to be aware.

GPT's bias isn't relevant here.

Perhaps this caution would be more appropriate on OpenAI's wikipage?

It is not the function of Wikipedia to "caution" people about things. Wikipedia editors summarize what reliable sources say about topics. If reliable sources do not discuss this copyright concern, then it does not belong anywhere on Wikipedia. Please read No original research which is a core content policy. Cullen328 (talk) 18:43, 30 November 2021 (UTC)

Oh, I understand now. Thank you. I will add a non-original research reference. — Preceding unsigned comment added by 2600:1700:7261:5680:5920:90B5:F607:406B (talk) 19:35, 30 November 2021 (UTC)


On this note, I saw a sentence without a citation, which I removed: All GPT-based products therefore run the risk of plagiarism. This doesn't seem true to me, and I don't think it should be in the article unless some reliable source on intellectual property agrees with it -- it's not a priori true that training a neural network on copyrighted material means all of its output is subject to that same copyright. If it were, for example, me writing this sentence and posting it on a Wikipedia talk page under CC-BY-SA 3.0 would be illegal because I learned English by reading books that weren't public domain. jp×g 21:33, 5 June 2022 (UTC)

References

  1. ^ "Here are a few ways GPT-3 can go wrong". TechCrunch.

Prompt samples

Since much of GPT-3's functionality is accessed by crafting prompts to elicit particular kinds of output, I suggest a section of prompt examples analogous to the code examples on the pages for various programming languages. What do people think? —Pleasantville

@Pleasantville: I fail to understand what you mean. Are you saying that you think we should present prompt examples as a way of illustrating to readers what kind of input GPT-3 requires in order to be able to produce a model response? Bluerasberry (talk) 20:05, 23 April 2022 (UTC)
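
For illustration, here is a minimal sketch of the kind of prompt-based usage being suggested above. It assumes the legacy openai Python package (pre-1.0 Completion API); the model name, prompt text, and parameters are illustrative placeholders rather than anything drawn from the article:

  # Minimal sketch: submitting a few-shot prompt to a GPT-3 model via the
  # legacy openai Python package (pre-1.0). Model name and prompt are illustrative.
  import openai

  openai.api_key = "sk-..."  # placeholder; a real API key is required

  # A small few-shot prompt: the model is expected to continue the pattern.
  prompt = (
      "Translate English to French:\n"
      "sea otter => loutre de mer\n"
      "cheese =>"
  )

  response = openai.Completion.create(
      model="text-davinci-003",  # illustrative model name
      prompt=prompt,
      max_tokens=16,
      temperature=0,
  )
  print(response["choices"][0]["text"])

A section in the article could present a handful of prompt/continuation pairs in this spirit, without the surrounding API code.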

Size of model in bytes?

Despite reading assurances by the developers, I still find it difficult not to dismiss some of GPT-3's output as mere stitched-together chunks of source material that just happen to be unknown to me. Knowing how many bytes of storage the fully trained model uses, and comparing that with the ratios of the best lossless and lossy compression algorithms for the same corpus, would be additionally convincing. Is that public? Is it a sensible question? 66.44.4.71 (talk) 10:33, 8 June 2022 (UTC)

I'm not sure if this makes sense to add to the article, but the answer to your question is that each model weight is 16 bits (2 bytes). You can multiply that by the size (in parameters) of a model to learn how much storage space it takes up. For example, a 175-billion-parameter model is roughly 350 GB. Stellaathena (talk) 03:42, 23 November 2023 (UTC)
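
For anyone who wants the arithmetic spelled out, here is a back-of-the-envelope sketch in Python, assuming 16-bit weights only (no optimizer state, gradients, or activations):

  # Rough storage estimate for a GPT-3-sized model with 16-bit weights.
  num_parameters = 175_000_000_000   # reported GPT-3 parameter count
  bytes_per_weight = 2               # 16 bits = 2 bytes (fp16/bf16)

  total_bytes = num_parameters * bytes_per_weight
  print(f"{total_bytes / 10**9:.0f} GB")   # prints "350 GB"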

Toxic language

The terms "toxic language" and "toxicity" are used in the section titled "Training and capabilities", but are not defined. At the minimum, I think there should be an inline definition of one or both terms, or possibly a new article. I found this explanation useful: https://towardsdatascience.com/toxicity-in-ai-text-generation-9e9d9646e68f Harris7 (talk) 10:56, 11 July 2022 (UTC)

Valid issue. But does OpenAI formally define toxic language in their LM? We can't cite how other LMs define toxic language in this article. — hako9 (talk) 07:17, 5 December 2022 (UTC)

Hello, I'd like to translate this page into Italian

Hello, I'd like to translate this page into Italian; currently it's not available as a language option. Can you help me get the process started? Thanks, Simo 71Simone (talk) 16:46, 26 August 2022 (UTC)

@71Simone: There is a "content translation tool" designed to support that kind of work, see this help page on the Italian Wikipedia: it:Aiuto:Traduzione_voci. Since the Italian Wikipedia indeed does not seem to have an article on GPT-3 yet (only a subsection on GPT-2), you would be creating a new entry with your translation.
PS: Since this is a general question about Wikipedia articles (rather than specific to this article's content), a forum like this one may be more suitable for followup questions. Regards, HaeB (talk) 19:21, 26 August 2022 (UTC)

Applications

Just wondering about the many apps that now exist using GPT-3 tech - the ones I know about are largely content generators for marketing/SEO. Jasper.ai, Copy.ai, and Rytr.ai are a few. OthersideAI also has an app that writes emails for you. I will add a line to reflect this, but interested to know how this list developed. LBirdy (talk) 19:54, 5 November 2022 (UTC)

Merger of various GPT article versions?

Please see discussion on Talk:OpenAI re: merger of GPT-4 article --> main OpenAI (or ChatGPT) article. Cheers, Cl3phact0 (talk) 11:31, 7 February 2023 (UTC)

Training Data Lacks Citation

The training data table and its lead-in reference https://towardsdatascience.com/will-gpt-3-kill-coding-630e4518c04d, which is not a reputable source. — Preceding unsigned comment added by Kevznkatz (talkcontribs) 23:52, 6 March 2023 (UTC)

The table is taken almost directly from the paper. I added a cite to the table caption to make this clear. (I also updated the cite to point to the arXiv version rather than the NeurIPS version, since that seems to be what our existing cites were actually based on.) Colin M (talk) 14:18, 19 March 2023 (UTC)

Imagic: Text-Based Real Image Editing with Diffusion Models

I would like to have a full description of this research paper. Eilum (talk) 07:20, 15 April 2023 (UTC)

Discussion on the addition of GPT-3.5 with Browsing (ALPHA) information

Hello fellow Wikipedia editors,

I have recently added information about the GPT-3.5 with Browsing (ALPHA) model to the article. This new model is an advancement in OpenAI's GPT series, featuring enhanced browsing capabilities and improved performance.

I would appreciate any feedback, suggestions, or concerns you may have about the information provided in my addition. Specifically, I'd like to know:

  1. Is the content clear, concise, and easy to understand?
  2. Are the sources reliable and properly cited?
  3. Is there any additional information that should be included or removed to make the section more comprehensive and relevant?

Please share your thoughts, and let's work together to ensure the accuracy and quality of the content.

Thank you for your collaboration.

Best regards,

Yabancii Yabancii (talk) 02:32, 27 April 2023 (UTC)

I improved it to sound less like an advertisement. Chuck541 (talk) 15:05, 2 June 2023 (UTC)

Pricing and GPT-4

The article makes it sound like GPT-3.5 is not free, which it now is, and it does not mention GPT-4 at all. I think the article needs to be updated to cover both. Chuck541 (talk) 15:04, 2 June 2023 (UTC)

Paraphrase

The progressive approaches use longer texts, and the texts are chosen because they are interesting in themselves. 41.13.128.49 (talk) 13:20, 23 January 2024 (UTC)