Talk:ASCII/Archive 3


Character set vs. Character encoding?

There is a definite difference between (finite) character sets and character encoding.

There is a sentence that currently reads:

Although ISO-8859-1 (Latin 1), its variant Windows-1252 (often mislabeled as ISO-8859-1), and the original 7-bit ASCII were the most common character encodings until the late 2000s, nowadays UTF-8 is becoming more common

But UTF-8 is not a character set in itself; it is a Transformation Format that can represent both the UCS-2 and UCS-4 sets. However, throughout the article, the ASCII character set is referred to as a character encoding, which doesn't quite sound right to me.

Should there be a consensus on the terminology to use? It may help with a lot of the confusion that goes with the subject.

ratarsed (talk) 09:46, 6 November 2012 (UTC)

As of 6.2/6.3, the Unicode standard says, in Chapter 1, "The Unicode Standard contains 1,114,112 code points, most of which are available for encoding of characters.", so there's only one character set - the first 65,536 constitute the Basic Multilingual Plane, but that's just a subset of Unicode.
For encodings, it says "Unicode characters are represented in one of three encoding forms: a 32-bit form (UTF-32), a 16-bit form (UTF-16), and an 8-bit form (UTF-8). The 8-bit, byte-oriented form, UTF-8, has been designed for ease of use with existing ASCII-based systems." shortly before that in Chapter 1.
In Chapter 2, it says, in 2.4 "Code Points and Characters":
On a computer, abstract characters are encoded internally as numbers. To create a complete character encoding, it is necessary to define the list of all characters to be encoded and to establish systematic rules for how the numbers represent the characters.
The range of integers used to code the abstract characters is called the codespace. A particular integer in this set is called a code point. When an abstract character is mapped or assigned to a particular code point in the codespace, it is then referred to as an encoded character.
In the Unicode Standard, the codespace consists of the integers from 0 to 10FFFF16, comprising 1,114,112 code points available for assigning the repertoire of abstract characters.
and, in 2.5 "Encoding Forms":
Computers handle numbers not simply as abstract mathematical objects, but as combinations of fixed-size units like bytes and 32-bit words. A character encoding model must take this fact into account when determining how to associate numbers with the characters.
Actual implementations in computer systems represent integers in specific code units of particular size—usually 8-bit (= byte), 16-bit, or 32-bit. In the Unicode character encoding model, precisely defined encoding forms specify how each integer (code point) for a Unicode character is to be expressed as a sequence of one or more code units. The Unicode Standard provides three distinct encoding forms for Unicode characters, using 8-bit, 16-bit, and 32-bit units. These are named UTF-8, UTF-16, and UTF-32, respectively. The “UTF” is a carryover from earlier terminology meaning Unicode (or UCS) Transformation Format. Each of these three encoding forms is an equally legitimate mechanism for representing Unicode characters; each has advantages in different environments.
All three encoding forms can be used to represent the full range of encoded characters in the Unicode Standard; they are thus fully interoperable for implementations that may choose different encoding forms for various reasons. Each of the three Unicode encoding forms can be efficiently transformed into either of the other two without any loss of data.
It also says:
The Unicode Consortium fully endorses the use of any of the three Unicode encoding forms as a conformant way of implementing the Unicode Standard. It is important not to fall into the trap of trying to distinguish “UTF-8 versus Unicode,” for example. UTF-8, UTF-16, and UTF-32 are all equally valid and conformant ways of implementing the encoded characters of the Unicode Standard.
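To make the "fully interoperable" claim quoted above concrete, here is a minimal sketch (Python; the sample string is made up) that round-trips the same text through all three encoding forms without loss:
  text = "A\u00e9\U0001F600"                     # an ASCII, a BMP, and a non-BMP character
  utf8  = text.encode("utf-8")
  utf16 = text.encode("utf-16-be")
  utf32 = text.encode("utf-32-be")
  # Each form can be transformed into the others without any loss of data.
  assert utf8.decode("utf-8") == utf16.decode("utf-16-be") == utf32.decode("utf-32-be") == text
  print(len(utf8), len(utf16), len(utf32))       # 7 8 12 bytes for the same three characters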
My personal inclination is to refer to "UTF-8-encoded Unicode", e.g. a file can be "ASCII text", meaning that it is a sequence of ASCII code points (so if it's a sequence of octets, none of those octets have the 8th bit set), or it could be "ISO 8859-1 text", meaning that it's a sequence of ISO 8859-1 code points (so the octets with the 8th bit not set are ASCII characters), or it could be "UTF-8-encoded Unicode text", or it could be "UTF-16-encoded text" (which means it's either big-endian or little-endian, indicated either by a byte-order mark or some out-of-band indication such as "the person who sent it to me told me it's big-endian" or "this is Windows, so it's little-endian"), or it could be "UTF-32-encoded text".
A separate characteristic of those files is the subset of the character set they contain; an "ISO 8859-1 text" file could contain the ASCII subset (so if it's a stream of octets, no octet in the file has the 8th bit set), in which case it's also an "ASCII text" file. A Unicode file, regardless of encoding, could contain the Basic Multilingual Plane subset. Guy Harris (talk) 18:45, 22 April 2014 (UTC)
(Note, of course, that "UTF-8-encoded Unicode" is redundant; if it's encoding something other than Unicode, it's not UTF-8. The redundant phrase, however, may serve to remind people that "UTF-8" encodes the entirety of Unicode, so, unless somebody explicitly says "but this text must use only characters in the Basic Multilingual Plane" or "but this text must use only characters in ISO 8859-1" or even "but this text must use only ASCII characters", code processing that text must be prepared to see arbitrary Unicode characters. It may also serve to remind people that "is this UTF-8 or is this Unicode?" is an ill-formed question, perhaps based on the long-out-of-date assumption that UCS-2 is Unicode and that unless each character is represented by two octets it's "not really Unicode".) Guy Harris (talk) 19:38, 22 April 2014 (UTC)
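As a rough illustration of the file-labelling scheme described above, a sketch (Python; the function name and returned labels are invented for this example, not from any standard):
  def label_text(data: bytes) -> str:
      # "ASCII text": no octet has the 8th bit set
      if all(octet < 0x80 for octet in data):
          return "ASCII text"
      # Octets with the 8th bit set that form valid UTF-8 sequences
      try:
          data.decode("utf-8")
          return "UTF-8-encoded Unicode text"
      except UnicodeDecodeError:
          # Every octet sequence is valid ISO 8859-1, so this can only be a guess
          return "ISO 8859-1 text (or some other 8-bit encoding)"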

Delete vs Backspace

As video terminals began to replace printing ones, the value of the "rubout" character was lost. DEC systems, for example, interpreted "Delete" to mean "remove the character before the cursor" and this interpretation also became common in Unix systems.

That would be because the DEC terminals were connected to Unix systems which were configured to understand the terminals. Some Unix systems used the 7f (DEL) character as the "interrupt" key, while others used the 03 (ETX) character for that purpose.

Most other systems used "Backspace" for that meaning and used "Delete" to mean "remove the character at the cursor". That latter interpretation is the most common now.

This seems to conflate the DEL character with the Delete key, which when used to mean "delete character at (under or forward of) the cursor" typically sends a sequence like 1b-5b-33-7e (ESC [ 3 ~), rather than the single character 7f (DEL).

Yes there are local applications such as Unix Xterm, or remote connectors such as PuTTY, SSH, & Telnet, but their choice between DEL & BS depends on the target service and/or local preference settings, so they don't sway the argument of which is "most common". Furthermore, some target services follow the EMACS tradition and use character 04 (EOT) for "delete character under cursor".

But most applications now do not receive a stream of characters at all; rather they receive events from the local windowing system (either directly, or from the browser within which they run).

Other changes have also occurred: the Return key has been renamed Enter or just ↲ on most keyboards, and it is treated as End-Of-Record or Next-Line rather than as a return on the same line.

Martin Kealey (talk) 01:56, 10 July 2014 (UTC)
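A small sketch (Python; the byte values come from the comments above, the function itself is only illustrative) of the distinction between the single DEL character and the sequence a forward-Delete key typically sends:
  BS         = b"\x08"        # Backspace character
  DEL        = b"\x7f"        # the single "rubout"/DEL character
  DELETE_KEY = b"\x1b[3~"     # ESC [ 3 ~, typically sent by a forward-Delete key

  def describe(seq: bytes) -> str:
      if seq in (BS, DEL):
          return "erase the character before the cursor (software-dependent)"
      if seq == DELETE_KEY:
          return "remove the character under the cursor"
      return "something else"

  print(describe(b"\x7f"))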

More accurately:
The DEC terminals - and, before DEC made terminals, the non-DEC terminals such as the Teletype Model 33 - were connected to various DEC operating systems, which interpreted DEL as "delete previous character", and that eventually got adopted by UNIX systems as well. Originally, UNIX systems imitated Multics systems, and used # for "delete previous character" and @ for "delete previous line"; the Multics systems at least had the excuse that they had to support non-ASCII terminals with wired-in local echo, such as the IBM 2741 and IBM 1050, so they couldn't do DEC-style tricks when echoing DEL; UNIX didn't have that problem, but they went with it anyway, and used DEL as the interrupt character.
The BSD folk decided that was bogus and implemented a more DEC-style tty interface, with the erase and kill characters echoing DEC-style (print the deleted characters between slashes/backslashes on printing terminals, erase them with backspace and space on display terminals), and with DEC-style choices of DEL for erase, ^U for line kill, and ^C for interrupt; that ended up becoming the most common tty interface on UN*Xes as well. The characters were settable on UN*X, so sometimes BS rather than DEL was used.
So it was DEC's operating systems, not UNIX, that gave us "DEL as what you type to delete the previous character".
And the notion that you type Return at the end of the line, even if it sends CR, is also a DECism; no DEC OS I remember required that you type both CR and LF at the end of a line. At least with the older OSes, the CR would be echoed as CR LF, and would show up as CR LF as input. (RSX-11 and VMS were a bit weird here, in that they treated FORTRAN line format as the proper text format; I think that typing CR ended the line, but echoed only as CR, and the next line output to the terminal would begin with LF and end with CR as sent to the terminal, because it would normally have SP as the initial FORTRAN control character. But I digress....)
UNIX followed in the Multics "LF by itself, with no CR, at the end of a line" tradition; typing CR would end the line, cause CR LF to be echoed, and cause just an LF to appear in the input stream. Guy Harris (talk) 02:30, 10 July 2014 (UTC)
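Since (as noted above) the erase, kill, and interrupt characters were settable rather than fixed by ASCII, a quick way to see the current choices on a UN*X-like system is to ask the tty driver; a minimal sketch (Python, POSIX only; it will fail if stdin is not a terminal):
  import sys, termios

  attrs = termios.tcgetattr(sys.stdin.fileno())
  cc = attrs[6]                                  # the special-character array
  for name, index in (("erase", termios.VERASE),
                      ("kill", termios.VKILL),
                      ("intr", termios.VINTR)):
      print(name, hex(ord(cc[index])))           # e.g. erase 0x7f, kill 0x15, intr 0x3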
I've made some edits to clarify that this is a software interpretation of input characters, and to get rid of the video terminal stuff entirely, as well as to ask for citations about the BS-vs-DEL claims. Guy Harris (talk) 03:10, 10 July 2014 (UTC)

Second representation of the printable character list

I've added a previously removed second representation of the ASCII characters that supports easy copy-pasting. I wasn't aware that it had already been on the page. My edit got reverted. However, I think it should still be there.

In Wikipedia, there are lots of examples where information is displayed multiple times, even when we don't count efforts to help disabled people, like "spoken wikipedia" or descriptions below images: take AES as an example. The text perfectly describes the steps but the images display a second representation of those steps (and a third is in the image description, but as previously described we don't count it).

We can see the redundancy also in the principle of the lead paragraph: except for the X in "is a X", most information is repeated in the article below. It gives the reader a concise definition of the topic that can be retrieved without having to read the whole article. The second representation of the ASCII list fulfills this second purpose: the reader doesn't have to read every single character to get a full list of ASCII characters.

And, redundancy is still present in the list itself:

Binary Oct Dec Hex Glyph
010 0000 040 32 20 (space)

The list gives us three representations of the character's number.

I think there are different use cases linked to the two representations: the first (currently the only) representation helps readers with various conversions between a character and its ASCII code; hence the multiple number formats. The second (disputed) representation helps readers who want to act on the whole set of characters: I've used it for password generation, and others might want to use it in a program, in case they write in a language that doesn't link numbers and characters as easily as C does.

What do you think? — Preceding unsigned comment added by Muelleum (talk • contribs) 21:04, 12 August 2014 (UTC)
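For what it's worth, the copy-paste use case above can also be served programmatically; a minimal sketch (Python; the use of secrets only mirrors the password-generation example mentioned above):
  import secrets

  printable_ascii = "".join(chr(code) for code in range(0x20, 0x7F))   # SP (32) through ~ (126)
  assert len(printable_ascii) == 95

  # e.g. a 16-character password drawn from the printable set, excluding the space
  password = "".join(secrets.choice(printable_ascii[1:]) for _ in range(16))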

Hello there! Hm, so the main purpose would be to make the whole ASCII set of characters easily available for copying and pasting? Maybe some kind of a compromise could be to provide it in form of a note after "There are 95 printable characters in total", using the {{Efn}} template? — Dsimic (talk | contribs) 21:54, 12 August 2014 (UTC)
I'm OK with that. Muelleum (talk) 23:24, 12 August 2014 (UTC)
Looking good, having a scrollable box was the only solution for long lines in reference tooltips. — Dsimic (talk | contribs) 23:32, 12 August 2014 (UTC)

In the Order section: Numbers are sorted naïvely as strings?

The article in the Order section talks about "ASCIIbetical order". The following quote appears in the text:

"Numbers are sorted naïvely as strings; for example, "10" precedes "2""

Is naïvely the right word? Natively maybe?

I am researching a proposed TerSCII table and am curious as to why ASCIIbetical order is what it is. A lot of thinking went into how the ASCII table is built. It would be nice to carry over lessons learned from ASCII to TerSCII.

2606:6000:6042:9600:25F2:8029:9F5C:52C9 (talk) 19:46, 4 June 2015 (UTC)Wilx

No, "naïvely" is what is intended - if you just, well, *naïvely* assume that all strings should be sorted the same way, you end up with "10" being less than "2", as "1" is < "2". ("Naïvely", as in "showing a lack of experience, wisdom, or judgement", i.e. not realizing that if you sort numbers as strings they won't come out in numerical order.)
That whole section doesn't really explain what it's talking about; what it's really discussing is sorting strings by simply comparing individual characters' code values, without paying any attention to getting numbers sorted by numerical value, words sorted without regard to case, etc. That's less a characteristic of ASCII than of simple (naïve) string comparison operations. You could have EBCDICibetical order as well, for example. Guy Harris (talk) 22:47, 4 June 2015 (UTC)
In particular, you are extremely unlikely to find a character encoding scheme that would magically make naïve string comparison sort strings the way humans would want them sorted, if you're going to compare ternary strings by comparing the numerical values of the characters in the string from beginning to end. Having the encodings for upper-case and lower-case letters be adjacent, so that 'A' < 'a' < 'B' < 'b' < 'C' < 'c' etc., and putting accented letters in the appropriate places might help, but that's not all there is to sorting words, and that won't fix the problem of sorting numbers, either. Localization of sorting may end up being about as painful in ternaryland as in binaryland.... Guy Harris (talk) 22:57, 4 June 2015 (UTC)
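A short sketch of the point being made (Python; the key function is illustrative and is not part of ASCII or of any locale machinery):
  import re

  strings = ["file10", "File2", "file1"]

  # Naive/"ASCIIbetical" comparison of code values:
  print(sorted(strings))                    # ['File2', 'file1', 'file10']

  # A key that compares digit runs numerically and ignores case:
  def natural_key(s):
      return [int(part) if part.isdigit() else part.lower()
              for part in re.split(r"(\d+)", s)]

  print(sorted(strings, key=natural_key))   # ['file1', 'File2', 'file10']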

A three-part article on ASCII

Sometime in the 1980s, I read an article in three successive issues of a personal (micro) computer magazine by the "inventor of ASCII", whoever that was, that went over all the non-alphabetic codes and was very enlightening. I've never been able to find it again. If anybody knows, please send me a message. Thanks. deisenbe (talk) 16:08, 14 August 2014 (UTC)

You are probably looking for "Inside ASCII". This was originally published as:
  • Bemer, R. W. (May 1978). "Inside ASCII - Part I". Interface Age. 3 (5). Portland, OR: Dilithium Press: 96–102.
  • Bemer, R. W. (June 1978). "Inside ASCII - Part II". Interface Age. 3 (6). Portland, OR: Dilithium Press: 64–74.
  • Bemer, R. W. (July 1978). "Inside ASCII - Part III". Interface Age. 3 (7). Portland, OR: Dilithium Press: 80–87.
Unfortunately it is almost impossible to find copies of Interface Age anymore. I am not sure how available this is either but it was also republished as:
  • Bemer, R. W. (1980). "Inside ASCII". General Purpose Software. Best of Interface Age. Vol. 2. Portland, OR: Dilithium Press. Chapter 1. ISBN 0-918398-37-1.
Bob Bemer wrote many other things on ASCII as well. Perhaps you can find one of these:
I hope that was helpful. 50.126.125.240 (talk) 00:56, 3 January 2016 (UTC)
That must be it, though the magazine doesn't ring a bell. Thank you. I met the author, and had a discussion of what he called the "Data-Link Escape" code. This must have been at the big microcomputer expo in Los Angeles in the spring of 1980. deisenbe (talk) 21:06, 3 January 2016 (UTC)

"No I've got sandwiches"

In the article under 7-bit the following is given to illustrate national variant problems:

Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar." as the answer, which should be "Nä jag har smörgåsar." meaning "No I've got sandwiches."

If "many" programmers used US-ASCII then the incidence of a programmer emailing another programmer would have some probability ((many/all)2) of both being US-ASCII. That indefinite probability added to the probability of both using Swedish encoding adds to half at the least, so the majority of emails sent from swedish programmers to swedish programmers should have ended up being displayed as they appeared on the sender's computer.

All I mean to say is that maybe one of the hypothetical people in the example shouldn't be a programmer. But I'm not going to change it because it doesn't make much difference anyway. Mattman00000 (talk) 06:53, 18 February 2016 (UTC)

Even if two programmers got the same encoding, and if it is US-ASCII, they can't write "smörgåsar", since ö and å aren't included in US-ASCII. The reason they used US-ASCII could be that braces or brackets were needed for programming languages or scripts. It is my personal experience as a programming student that Swedish with {|}[\] instead of åäöÅÄÖ was used.--BIL (talk) 22:57, 30 August 2016 (UTC)
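For anyone puzzling over the example, a small sketch (Python) of the substitution involved; the lower-case mappings come from the quoted example, and the upper-case ones are assumed from the Swedish national variant of ISO 646:
  # The Swedish variant reuses the US-ASCII code points for [ \ ] { | } as Ä Ö Å ä ö å
  swedish_variant = str.maketrans("[\\]{|}", "ÄÖÅäöå")

  print("N{ jag har sm|rg}sar.".translate(swedish_variant))
  # -> Nä jag har smörgåsar.  ("No I've got sandwiches.")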

Why are some characters boxed on the ASCII code chart?

I can't understand why some characters are boxed on the ASCII code chart... can someone help, please? Erikkonstas (talk) 09:42, 30 August 2016 (UTC)

There are several charts/tables on the page; which one are you referring to? And by "boxed" do you just mean "the character has a box drawn around it"? Guy Harris (talk) 16:52, 30 August 2016 (UTC)
The slightly darker shaded colors and boxes around some characters in the ASCII chart indicate that historically there was more than one definition for these code points. The slightly darker color indicates a non-conflicting extension/revision (e.g. the renaming of some control codes without changing their actual meaning, or the addition of some other control codes or lower-case letters elsewhere). The boxes indicate that the meaning of these code points changed in more drastic ways (this includes some control codes, but also a number of special symbols, some of which were reserved for "national use" or where the glyphs officially had some "double meaning"). The details should become obvious either from discussions elsewhere in the article or from studying the different revisions of the actual ASCII standard, but I agree that this isn't exactly self-explanatory, and we should add some extra footnotes to the chart at some later stage. (I probably will do so sometime in the future, but first I wanted to research the development history of ASCII some more.)
--Matthiaspaul (talk) 11:54, 17 October 2016 (UTC)

Can someone please correct the Pronunciation of ASCII?

There is something amiss. The current pronunciation is listed as "æski" with a respell of "ASS-kee", citing (Mackenzie 1980). In my common practice, it is not pronounced with an "ASS", but rather an "AZ" like the word "as". Those pronouncing it with a defined "ASS" are being juvenile. Say it to yourself and bring emphasis to the second S of "ASS-kee".

Second, the cited reference of (Mackenzie 1980) provides "ass-key", so "ASS-kee" is pulled from somewhere else not referenced.

Finally, several other websites provide a range of pronunciations, but none I could find support "ASS-kee". [1] gives "as-kee". [2] gives "/ˈæski/". [3] gives "/ˈaski/".

I propose it is changed to "/ˈæski/" with respell "as-kee". At a minimum, revert to "ass-key" to accurately reflect the quoted reference. Thoughts? — Preceding unsigned comment added by 204.16.25.237 (talk) 17:23, 11 May 2017 (UTC)

The Wiktionary entry for "as" shows the pronunciation, in IPA, as "æz", not "æs", and the Merriam-Webster entry for "as" also shows the final consonant as voiced, rather than voiceless. I've never heard "ASCII" pronounced with a voiced sibilant.
I presume the capitalization of "ASS" is to indicate that it's the stressed syllable; I couldn't find anything immediately obvious in the Wikipedia documentation/style guide to indicate that it's the recommended way to do so. Guy Harris (talk) 20:24, 12 May 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified 6 external links on ASCII. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

 Y An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 05:27, 24 June 2017 (UTC)

Some just moved; the others work. Guy Harris (talk) 20:25, 24 June 2017 (UTC)

Un-broken date

On its page, date has reverted to 1963. (due to RASCII) Rowan03 (talk) 21:03, 13 July 2017 (UTC)

Yes, given that nobody has provided any evidence that anything having to do with the American Standard Code for Information Interchange happened prior to 1960, the date was fixed to 1963, the date when the standard was published. Just because Andy Dingley thinks ASCII started in 1957, that doesn't mean it actually did start then; he gives no citation for that claim. Furthermore, the date that the project was started, whether it's 1957 or 1960, isn't the date of introduction; the date the standard was published was. Guy Harris (talk) 21:58, 13 July 2017 (UTC)

Where should this go

I have removed the following from the overview section, as it is definitely not of general interest. But where, if anywhere, does it belong?

A June 1992 RFC[1] and the Internet Assigned Numbers Authority registry of character sets[2] recognize the following case-insensitive aliases for ASCII as suitable for use on the Internet: ANSI_X3.4-1968 [sic] (canonical name), iso-ir-6, ANSI_X3.4-1986, ISO_646.irv:1991, ASCII, ISO646-US, US-ASCII (preferred MIME name),[2] us, IBM367, cp367, and csASCII.
Of these, the IANA encourages use of the name "US-ASCII" for Internet uses of ASCII (even though it is a redundant acronym, the "US-" is needed because the ASCII term is regularly confused with other 8-bit character encoding schemes such as extended ASCII or UTF-8, for example). One often finds this in the optional "charset" parameter in the Content-Type header of some MIME messages, in the equivalent "meta" element of some HTML documents, and in the encoding declaration part of the prologue of some XML documents.

Please feel free to reinsert at an appropriate location. Clean Copytalk 10:17, 11 August 2017 (UTC)

References

  1. ^ Simonsen, Keld Jørn (June 1992), Character Mnemonics & Character Sets, Internet Engineering Task Force (IETF), RFC 1345, archived from the original on 2016-06-13, retrieved 2016-06-13
  2. ^ a b Internet Assigned Numbers Authority (IANA) (May 14, 2007). "Character Sets". Accessed 2008-04-14.
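Purely as an illustration of those labels in use (a sketch; the header line is made up, and the alias behaviour shown is CPython's, which may not track the IANA registry exactly):
  # A MIME header using the IANA-preferred name:
  #   Content-Type: text/plain; charset=US-ASCII
  import codecs

  print(codecs.lookup("US-ASCII").name)           # 'ascii'
  print(codecs.lookup("ANSI_X3.4-1968").name)     # 'ascii' - the canonical IANA name is also accepted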

Control-Z as end-of-file

First, for TOPS-10: the use of Control-Z as end-of-file existed, but only from the Teletype. Control-Z on paper tape, mag tape, or disk files was just another character. In other words, this was specific to the terminal device driver. I don't remember if there was a standard escape mechanism for the various control characters, other than the program using a raw mode.

Second, also for TOPS-10: disk files had a count of words, not a count of characters and not a count of records. So plain text files could have 0 to 4 NULs at the end to finish the last word. The input routine ignored all NULs coming in, also because sequence-numbered files were word-aligned for every line - see SOS and PIP.

Third, as far as CP/M goes, the original use of Control-Z was as a filler character, since the OS only kept a count of (128-byte) records. So a character file would have 0 to 127 SUB characters at the end to fill out the last record. (Why they used SUB instead of NUL like DEC did baffles me.)

This was done so that simple code that read from either the terminal or a disk file would quit at the same point, using the same code for both the terminal and the disk file (i.e., when it hit a ^Z in the file or when the user typed ^Z). NUL would not work because of the need to ignore NULs from paper-tape input, and also because it was not possible to type a NUL on many terminals. Spitzak (talk) 22:23, 9 October 2017 (UTC)

Then, common usage changed this to merge TOPS-10's TTY end-of-file and the filler idea to have a Control-Z as an explicit 'end' character.

I have some TOPS-10 and CP/M manuals. I can do some better research if desired. — Preceding unsigned comment added by Wmdgus73 (talk • contribs) 01:17, 3 January 2016 (UTC)

OK, I got rid of the mention of TOPS-10, as a quick look at an online TOPS-10 manual did, indeed, suggest that the padding was with NUL. Guy Harris (talk) 05:43, 7 October 2017 (UTC)
And it now mentions the from-the-terminal interpretation of ^Z on the PDP-6 monitor and TOPS-10. Guy Harris (talk) 23:20, 8 October 2017 (UTC)
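A small sketch (Python; the function name is illustrative) of reading text back out of files padded in the two ways described above:
  SUB = b"\x1a"    # Control-Z, used by CP/M as a record filler / end-of-text marker
  NUL = b"\x00"    # used by TOPS-10 to pad out the last word of a text file

  def stored_text(data: bytes) -> bytes:
      end = data.find(SUB)
      if end != -1:
          data = data[:end]          # CP/M style: ignore everything from the first ^Z on
      return data.rstrip(NUL)        # TOPS-10 style: drop trailing NUL padding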

Some browsers may not display Control Pictures properly?

"The Unicode characters from the area U+2400 to U+2421 reserved for representing control characters when it is necessary to print or display them rather than have them perform their intended function. Some browsers may not display these properly."

Are there examples of browsers not displaying control picture characters properly? I've used a large number of browsers on various platforms for a long time, and these characters have been supported by all of them for something like 14-20 years. Maybe simple character-mode browsers have trouble, but surely users of those are well aware that their clients can't expect full parity.

Brianary (talk) 16:15, 26 October 2017 (UTC)
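For reference, the substitution the quoted sentence describes amounts to one code-point offset per character; a sketch (Python):
  def make_visible(text: str) -> str:
      # Replace C0 controls and DEL with their U+2400..U+2421 control pictures.
      out = []
      for ch in text:
          code = ord(ch)
          if code < 0x20:
              out.append(chr(0x2400 + code))   # e.g. ESC (0x1B) -> U+241B SYMBOL FOR ESCAPE
          elif code == 0x7F:
              out.append("\u2421")             # SYMBOL FOR DELETE
          else:
              out.append(ch)
      return "".join(out)

  print(make_visible("a\tb\x1bc\x7f"))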

Linux/Unix has a simple man page for ASCII

http://man7.org/linux/man-pages/man7/ascii.7.html — Sbmeirow • Talk • 04:53, 7 October 2017 (UTC)

Yes, it's in most if not all Unix-like systems, but I'm not convinced it's worthy of note in this article. Guy Harris (talk) 05:16, 7 October 2017 (UTC)
I don't know either. But if it were, I think it would also be worth mentioning that the DR-DOS DEBUG tool has a special command named ASCII [value] that produces an ASCII table (with optional coordinate markers (*) for value):
C:\>DEBUG
-??                  ; (invoke built-in extended help system)
[…]                  ; (omitting 8 pages on other commands here)
                   --- Utility commands ---
ASCII [value]        + Display an ASCII table
CLS                  + Clear screen
CPU                  + Get CPU type and operating mode
H value              + Display 'value' as hex, dec, char, octal, and binary
H value1 value2      + Display results of ADD, SUB, MUL, DIV, and MOD
V                    + Show user screen
; comment            + Comment line
[…]
-ASCII 5C            ; (invoke character map)
                         *
             0123456789ABCDEF        0123456789ABCDEF
           0 ................      8 ÇüéâäàåçêëèïîìÄÅ
           1 ................      9 ÉæÆôöòûùÿÖÜø£Ø׃
           2  !"#$%&'()*+,-./      A áíóúñѪº¿®¬½¼¡«»
           3 0123456789:;<=>?      B ░▒▓│┤ÁÂÀ©╣║╗╝¢¥┐
           4 @ABCDEFGHIJKLMNO      C └┴┬├─┼ãÃ╚╔╩╦╠═╬¤
          *5 PQRSTUVWXYZ[\]^_      D ðÐÊËÈıÍÎÏ┘┌█▄¦Ì▀
           6 `abcdefghijklmno      E ÓßÔÒõÕµþÞÚÛÙýݯ´
           7 pqrstuvwxyz{|}~⌂      F ­±‗¾¶§÷¸°¨·¹³²■ 
-Q                   ; (quit DEBUG)
C:\>
Also, the built-in help system in 4DOS has a few pages on ASCII as well (similar, but more comprehensive than the Linux man page), invokable by typing ASCII at the prompt followed by pressing the F1 key.
--Matthiaspaul (talk) 01:20, 14 October 2017 (UTC)
That output is not ASCII, it is some extended ASCII, so it should not be referred to by this page. The Unix man command does limit itself to ASCII, but it really is not very important either.Spitzak (talk) 17:58, 26 October 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on ASCII. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

 Y An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:34, 11 January 2018 (UTC)

The first of those doesn't appear to be broken; I changed "dead-url" back to "no".
The second of those is still at the commented-out URL; I put that one back.
The third of those appears, from the title given to it, to be a duplicate of the reference just before it - there's no way to check, unless it's moved somewhere else not obvious, so I removed it.
The fourth of those works. Guy Harris (talk) 10:24, 11 January 2018 (UTC)

Character Set

The subsection Character set has an image of a table with eight rows and 16 columns. Each cell has a thin black border, except for 27 which have a heavy (thick) grey border. Why the thick grey border? User:Nwbeeson — Preceding unsigned comment added by 96.60.34.200 (talk) 18:50, 18 April 2018 (UTC)

I'm not seeing that; I'm seeing several cells with thick grey borders.
It looks as if the intent is to give codes whose meaning changed between the 1963 and 1967 versions of ASCII the thick grey border; see the charts above. It isn't entirely consistent; 64, which changed from @ to ` in 1965 and back to @ in 1967, isn't given thick grey borders, but 92, which went from \ to ~ and back to \, is. Codes that weren't assigned glyphs or control functions in 1963 (such as lower-case letters) aren't given thick grey borders.
This needs to be explained in the text before the table. Guy Harris (talk) 19:03, 18 April 2018 (UTC)

ZX Spectrum chr$ 96

The Sinclair ZX Spectrum uses chr$ 96 as the pound sign. 85.149.83.125 (talk) —Preceding undated comment added 15:44, 18 June 2018 (UTC)

Yes - the ZX Spectrum character set page says

It is based on ASCII-1967 but the characters ^, ` and DEL are replaced with ↑, £ and ©. It also differs in its use of the C0 control codes other than the common BS and CR, and it makes use of the 128 high-bit characters beyond the ASCII range.

so perhaps ASCII#8-bit codes should not say "the more common ASCII-1967, such as found on the ZX Spectrum computer", but instead say the ZX Spectrum character set was a modified version of ASCII-1967 (it's not based on ISO 646, because DEL is part of ISO 646 but the Spectrum code uses it as a printable character). Guy Harris (talk) 19:01, 18 June 2018 (UTC)
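If it helps to make the differences concrete, a sketch (Python; only the three substitutions quoted above are handled, everything else in the Spectrum set is out of scope here):
  # The ZX Spectrum printable range follows ASCII-1967 except for these three code points.
  SPECTRUM_OVERRIDES = {0x5E: "↑", 0x60: "£", 0x7F: "©"}

  def spectrum_char(code: int) -> str:
      if code in SPECTRUM_OVERRIDES:
          return SPECTRUM_OVERRIDES[code]
      if 0x20 <= code <= 0x7E:
          return chr(code)                       # same glyph as ASCII-1967
      raise ValueError("control codes and high-bit graphics not handled in this sketch")

  print(spectrum_char(96))                       # '£' - the chr$ 96 mentioned above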

Discussion on ASCII table

I see you reverted it back to hideous. Well, I tried. I would like to explain some of my reasons for changing this:

Good faith, but I very much prefer the old table:

Standard layout (used almost everywhere else as well),

I was intending to change all the other instances, after trying this out, so this one would be "used almost everywhere else as well". This is already used for every Unicode code page entry so it may actually be more-used even though it is in the end less visible.

color grouping

One of my primary goals was to eliminate this bullshit. It is wrong in every other table for any non-ASCII character, and does not convey any usable information (I think everybody knows the difference between digits and letters), and destroys the ability to use colors and legends to indicate more useful information such as variances!

indication of variances

Fully intended to support this with colors, though in this case I thought it quite redundant with the information in the above tables, so I left it out. Also I think the variance indicators should be reserved for character sets that a user may actually have a tiny chance of encountering; pre-1967 ASCII just does not exist anywhere in the world.

more and directly readable codes

The Unicode code points are consistently wrong in other tables (due to well-meaning editors changing them to the code point), so putting them in the tooltip with a clear U+ prefix and name would help a lot. Text also allows non-Unicode characters to be described.

(no tooltips, which don't show at all over here, and would require a mouse anyway)

Yes there is little if anything that can be done. If you want more information about each character the only possibility is to make a big vertical table, one character per line. If that is what you think should be done, do that instead (the ASCII page already has such a table).

Please send some constructive criticism, or state that in no way will you consider the removal of information from the boxes an improvement. Spitzak (talk) 17:29, 17 July 2018 (UTC)

Answer: see below.
--Matthiaspaul (talk) 18:46, 30 September 2018 (UTC)

LEM

The table of control characters shows LEM as an alias for ETB, but nowhere is there any discussion of what LEM is. I've checked the talk archives, and nada there as well. Nor is there anything at ETB. There is an expansion of the initialism as "Logical End of Media" available from http://ascii-table.com, but no further explanation; it would appear to be intended to signify the data end of a tape, whether paper or magnetic, rather than the physical end. 192.31.106.36 (talk) 11:49, 14 October 2018 (UTC)