Unicode


Outline

  1. Introduction
  2. What Unicode Includes
  3. Unicode Encodings
  4. Sources of Information
  5. Tools

Introduction

ASCII is by far the most commonly used character encoding because it suffices for normal English text and English has long been the dominant (natural) language used on computers. As other languages came into use on computers, other sets of characters, with different encodings, came into existence. Indeed, there is usually more than one encoding for a particular writing system. All in all, there are hundreds of different character encodings.

This proliferation of character encodings causes a lot of problems. If you receive a document from someone else, your software may not be able to display it, print it, or edit it. You may not even be able to tell what language or writing system it is in. And if you need to use multiple writing systems in the same document, matters become much worse. Life would be much simpler if there were a single, universal encoding that covered all of the characters in all of the writing systems in use.

Unicode is a character encoding standard developed by the Unicode Consortium to fulfill this need. It attempts to include in a single encoding, using a single sequence of numbers, all of the characters in all of the writing systems that anyone is likely to want to use. Some aspects of Unicode have come in for criticism, and there are some alternative proposals, but at least for now it is by far the most widely adopted universal encoding.



What Unicode Includes

The current version of the Unicode standard contains almost all of the writing systems currently in use, plus a few extinct systems, such as Linear B. More writing systems will be added in the future. The following chart lists the character ranges that have thus far been defined. Charts for each range, in both PDF and HTML form, are available on the Unicode web site.

Range          Name
0000-007F      Basic Latin
0080-00FF      C1 Controls and Latin-1 Supplement
0100-017F      Latin Extended-A
0180-024F      Latin Extended-B
0250-02AF      IPA Extensions
02B0-02FF      Spacing Modifier Letters
0300-036F      Combining Diacritical Marks
0370-03FF      Greek/Coptic
0400-04FF      Cyrillic
0500-052F      Cyrillic Supplement
0530-058F      Armenian
0590-05FF      Hebrew
0600-06FF      Arabic
0700-074F      Syriac
0750-077F      Undefined
0780-07BF      Thaana
07C0-08FF      Undefined
0900-097F      Devanagari
0980-09FF      Bengali/Assamese
0A00-0A7F      Gurmukhi
0A80-0AFF      Gujarati
0B00-0B7F      Oriya
0B80-0BFF      Tamil
0C00-0C7F      Telugu
0C80-0CFF      Kannada
0D00-0D7F      Malayalam
0D80-0DFF      Sinhala
0E00-0E7F      Thai
0E80-0EFF      Lao
0F00-0FFF      Tibetan
1000-109F      Myanmar
10A0-10FF      Georgian
1100-11FF      Hangul Jamo
1200-137F      Ethiopic
1380-139F      Undefined
13A0-13FF      Cherokee
1400-167F      Unified Canadian Aboriginal Syllabics
1680-169F      Ogham
16A0-16FF      Runic
1700-171F      Tagalog
1720-173F      Hanunoo
1740-175F      Buhid
1760-177F      Tagbanwa
1780-17FF      Khmer
1800-18AF      Mongolian
18B0-18FF      Undefined
1900-194F      Limbu
1950-197F      Tai Le
1980-19DF      Undefined
19E0-19FF      Khmer Symbols
1A00-1CFF      Undefined
1D00-1D7F      Phonetic Extensions
1D80-1DFF      Undefined
1E00-1EFF      Latin Extended Additional
1F00-1FFF      Greek Extended
2000-206F      General Punctuation
2070-209F      Superscripts and Subscripts
20A0-20CF      Currency Symbols
20D0-20FF      Combining Diacritical Marks for Symbols
2100-214F      Letterlike Symbols
2150-218F      Number Forms
2190-21FF      Arrows
2200-22FF      Mathematical Operators
2300-23FF      Miscellaneous Technical
2400-243F      Control Pictures
2440-245F      Optical Character Recognition
2460-24FF      Enclosed Alphanumerics
2500-257F      Box Drawing
2580-259F      Block Elements
25A0-25FF      Geometric Shapes
2600-26FF      Miscellaneous Symbols
2700-27BF      Dingbats
27C0-27EF      Miscellaneous Mathematical Symbols-A
27F0-27FF      Supplemental Arrows-A
2800-28FF      Braille Patterns
2900-297F      Supplemental Arrows-B
2980-29FF      Miscellaneous Mathematical Symbols-B
2A00-2AFF      Supplemental Mathematical Operators
2B00-2BFF      Miscellaneous Symbols and Arrows
2C00-2E7F      Undefined
2E80-2EFF      CJK Radicals Supplement
2F00-2FDF      Kangxi Radicals
2FE0-2FEF      Undefined
2FF0-2FFF      Ideographic Description Characters
3000-303F      CJK Symbols and Punctuation
3040-309F      Hiragana
30A0-30FF      Katakana
3100-312F      Bopomofo
3130-318F      Hangul Compatibility Jamo
3190-319F      Kanbun (Kunten)
31A0-31BF      Bopomofo Extended
31C0-31EF      Undefined
31F0-31FF      Katakana Phonetic Extensions
3200-32FF      Enclosed CJK Letters and Months
3300-33FF      CJK Compatibility
3400-4DBF      CJK Unified Ideographs Extension A
4DC0-4DFF      Yijing Hexagram Symbols
4E00-9FAF      CJK Unified Ideographs
9FB0-9FFF      Undefined
A000-A48F      Yi Syllables
A490-A4CF      Yi Radicals
A4D0-ABFF      Undefined
AC00-D7AF      Hangul Syllables
D7B0-D7FF      Undefined
D800-DBFF      High Surrogate Area
DC00-DFFF      Low Surrogate Area
E000-F8FF      Private Use Area
F900-FAFF      CJK Compatibility Ideographs
FB00-FB4F      Alphabetic Presentation Forms
FB50-FDFF      Arabic Presentation Forms-A
FE00-FE0F      Variation Selectors
FE10-FE1F      Undefined
FE20-FE2F      Combining Half Marks
FE30-FE4F      CJK Compatibility Forms
FE50-FE6F      Small Form Variants
FE70-FEFF      Arabic Presentation Forms-B
FF00-FFEF      Halfwidth and Fullwidth Forms
FFF0-FFFF      Specials
10000-1007F    Linear B Syllabary
10080-100FF    Linear B Ideograms
10100-1013F    Aegean Numbers
10140-102FF    Undefined
10300-1032F    Old Italic
10330-1034F    Gothic
10380-1039F    Ugaritic
10400-1044F    Deseret
10450-1047F    Shavian
10480-104AF    Osmanya
104B0-107FF    Undefined
10800-1083F    Cypriot Syllabary
10840-1CFFF    Undefined
1D000-1D0FF    Byzantine Musical Symbols
1D100-1D1FF    Musical Symbols
1D200-1D2FF    Undefined
1D300-1D35F    Tai Xuan Jing Symbols
1D360-1D3FF    Undefined
1D400-1D7FF    Mathematical Alphanumeric Symbols
1D800-1FFFF    Undefined
20000-2A6DF    CJK Unified Ideographs Extension B
2A6E0-2F7FF    Undefined
2F800-2FA1F    CJK Compatibility Ideographs Supplement
2FA20-DFFFF    Unused
E0000-E007F    Tags
E0080-E00FF    Unused
E0100-E01EF    Variation Selectors Supplement
E01F0-EFFFF    Unused
F0000-FFFFD    Supplementary Private Use Area-A
FFFFE-FFFFF    Unused
100000-10FFFD  Supplementary Private Use Area-B

Each block of 65,536 codepoints is referred to as a plane. Planes are numbered beginning at 0. Plane 0, codepoints 0x0000 through 0xFFFF, is known as the Basic Multilingual Plane or BMP because it contains the great majority of characters in current use for the world's languages.
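
The plane of a codepoint is just its value divided by 0x10000 (65,536). Here is a minimal sketch in Python (the function name is our own):

    def plane(codepoint):
        # A codepoint's plane number is its value divided by 0x10000 (65,536).
        return codepoint >> 16

    print(plane(0x0B87))    # 0: Tamil letter I lies in the BMP
    print(plane(0x10024))   # 1: Linear B qe lies in Plane 1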

Most of the Unicode ranges represent a single writing system. However, this is not always the case. In some cases Unicode lumps together several writing systems. For example, what it calls the Canadian Aboriginal Syllabics is not a single writing system. It is actually the union of the Cree writing system, the Inuktitut writing system, several variants used for languages such as Slave, Dogrib, and Dene Souline (Chipewyan), and the historically related but quite different Carrier writing system. Bengali and Assamese are combined since they differ only in the use of an additional character in Assamese and in the shapes of one letter. The Chinese characters used for Chinese, Japanese, Korean and Vietnamese are combined into a single set referred to as "CJK characters".

Languages written in a combination of writing systems, such as Japanese, which is typically written in a mixture of Chinese characters, hiragana, and katakana, will make use of multiple ranges. However, a language need not make use of multiple writing systems for it to draw characters from multiple Unicode ranges. A text in a language written in a non-Roman writing system will almost always contain characters from at least two ranges. This is because whitespace characters such as space and line feed, the Arabic numerals, and European punctuation are widely used. These characters, which are included in the Basic Latin range, are not repeated in the other ranges. For example, here is a bit of what we would think of as pure Tamil text: இல்லையே, இது வரைக்கும் பேசவேயில்லையே. However, it actually contains several characters from the Basic Latin range because the spaces and punctuation are Basic Latin. Here is a listing of the character ranges in this text:

Range        Name
0000-007F    Basic Latin
0B80-0BFF    Tamil

Here is a listing of the individual characters:

Offset  UTF-32    Range and Name
     0  0x00B87   TAMIL LETTER I
     1  0x00BB2   TAMIL LETTER LA
     2  0x00BCD   TAMIL SIGN VIRAMA
     3  0x00BB2   TAMIL LETTER LA
     4  0x00BC8   TAMIL VOWEL SIGN AI
     5  0x00BAF   TAMIL LETTER YA
     6  0x00BC7   TAMIL VOWEL SIGN EE
     7  0x0002C   BASIC LATIN COMMA
     8  0x00020   BASIC LATIN SPACE
     9  0x00B87   TAMIL LETTER I
    10  0x00BA4   TAMIL LETTER TA
    11  0x00BC1   TAMIL VOWEL SIGN U
    12  0x00020   BASIC LATIN SPACE
    13  0x00BB5   TAMIL LETTER VA
    14  0x00BB0   TAMIL LETTER RA
    15  0x00BC8   TAMIL VOWEL SIGN AI
    16  0x00B95   TAMIL LETTER KA
    17  0x00BCD   TAMIL SIGN VIRAMA
    18  0x00B95   TAMIL LETTER KA
    19  0x00BC1   TAMIL VOWEL SIGN U
    20  0x00BAE   TAMIL LETTER MA
    21  0x00BCD   TAMIL SIGN VIRAMA
    22  0x00020   BASIC LATIN SPACE
    23  0x00BAA   TAMIL LETTER PA
    24  0x00BC7   TAMIL VOWEL SIGN EE
    25  0x00B9A   TAMIL LETTER CA
    26  0x00BB5   TAMIL LETTER VA
    27  0x00BC7   TAMIL VOWEL SIGN EE
    28  0x00BAF   TAMIL LETTER YA
    29  0x00BBF   TAMIL VOWEL SIGN I
    30  0x00BB2   TAMIL LETTER LA
    31  0x00BCD   TAMIL SIGN VIRAMA
    32  0x00BB2   TAMIL LETTER LA
    33  0x00BC8   TAMIL VOWEL SIGN AI
    34  0x00BAF   TAMIL LETTER YA
    35  0x00BC7   TAMIL VOWEL SIGN EE
    36  0x0002E   BASIC LATIN FULL STOP
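
A rough approximation to such a listing can be produced with Python's unicodedata module. This is only a sketch of the idea, not the uniname program itself:

    import unicodedata

    def list_characters(text):
        # Print each character's offset, codepoint, and Unicode name.
        for offset, ch in enumerate(text):
            print(f"{offset:3d}  0x{ord(ch):05X}  {unicodedata.name(ch, '<unnamed>')}")

    list_characters("இது வரை.")   # a short mixed Tamil/Basic Latin sample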

Many languages written in extended versions of the Roman alphabet will also draw characters from several ranges. The Basic Latin range includes the basic twenty-six letters with no diacritics. If a language uses accents or other diacritics, or if it includes additional characters, it will draw characters from the Latin-1 Supplement or one of the two Latin Extended ranges. For example, Polish makes use of most of the ordinary Roman letters as well as characters such as Ł, which belongs to the Latin Extended-A range.

The Private Use Areas allow for the inclusion of non-standard characters in Unicode text. Any group of people can agree to use a certain encoding for a certain set of characters and exchange documents using them without fear of conflict between standard Unicode characters and their private character set. If a document contains characters in one of these ranges, one will not be able to display them or manipulate them intelligently unless one knows what they are. However, software processing such a document can simply be told to ignore characters in Private Use Areas.

One current use of the Private Use Areas is for writing systems that may eventually be included in the Unicode standard but have not yet been. For example, yudit supports the Hungarian runes and the Klingon alphabet, both encoded in the Private Use Area. Both of these writing systems may eventually be included in the standard. Another use for the Private Use Areas is for writing systems so obscure that they may never be included in the standard.



Unicode Encodings

UTF-32

Unicode was originally intended to use two bytes, that is, 16 bits, to represent each character. That would be sufficient for 65,536 characters. Although this may seem like a lot, it isn't really quite enough, so full Unicode makes use of 32 bits, that is, four eight-bit bytes. That's enough for 4,294,967,296 characters. In fact, although a 32-bit representation is used, the current standard actually calls for the use of only 21 bits; the high 11 bits are always 0. This provides for 2,097,152 characters, which should still be plenty. Text encoded in this four-byte form is said to be in UTF-32.
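
Python can display the four-byte patterns directly; note the leading zero bytes (the sample characters are drawn from the charts below):

    for ch in "Aத𐀤":
        print(f"U+{ord(ch):06X}:", " ".join(f"{b:02X}" for b in ch.encode("utf-32-be")))
    # U+000041: 00 00 00 41
    # U+000BA4: 00 00 0B A4
    # U+010024: 00 01 00 24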

UTF-16

When it was first realized that more than 65,536 characters might be needed, an attempt was made to expand the character space while keeping what was basically a two-byte encoding. The result was UTF-16. UTF-16 adds a complication: surrogate pairs. The ranges 0xD800-0xDBFF, the High Surrogate Area, and 0xDC00-0xDFFF, the Low Surrogate Area, do not directly represent characters. Instead, pairs of values, one a high surrogate, the other a low surrogate, together encode a character. The low ten bits of the high surrogate are concatenated with the low ten bits of the low surrogate, yielding a 20-bit number; adding 0x10000 to this number gives the codepoint, so surrogate pairs represent the codepoints 0x10000 through 0x10FFFF. Such surrogate pairs can encode 1,048,576 additional characters. UTF-16 can therefore encode a total of 65,536 - 2,048 + 1,048,576 or 1,112,064 characters. The characters in the BMP are represented by two bytes; characters outside the BMP are represented by four bytes.
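
The arithmetic is easy to express in code. Here is a sketch in Python; the function names are our own:

    def to_surrogates(codepoint):
        # Split a codepoint beyond the BMP into a UTF-16 surrogate pair.
        v = codepoint - 0x10000           # a 20-bit value
        return 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)

    def from_surrogates(high, low):
        # Recombine a surrogate pair into a codepoint.
        return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)

    high, low = to_surrogates(0x10024)        # Linear B qe
    print(hex(high), hex(low))                # 0xd800 0xdc24
    print(hex(from_surrogates(high, low)))    # 0x10024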

UTF-8

One problem with UTF-32 is that every character requires four bytes, that is, four times as much space as ASCII and other single-byte encodings require. In order to save space, a compressed form known as UTF-8 is usually used to store and exchange text. UTF-8 uses from one to four bytes to represent a character. It is cleverly arranged so that ASCII characters take up only one byte. Since the first 128 Unicode characters are the ASCII characters, in the same order, a UTF-8 file containing nothing but ASCII characters is identical to an ASCII file. Other characters take up more space, depending on how large the UTF-32 code is. Here are the encodings of some of the characters shown above. The 0x indicates that these are hexadecimal (base 16) values.

UTF-32    UTF-8                  Name
0x00041   0x41                   Latin capital letter a
0x00570   0xD5 0xB0              Armenian small letter ho (հ)
0x00BA4   0xE0 0xAE 0xA4         Tamil letter ta (த)
0x04E09   0xE4 0xB8 0x89         Chinese digit 3 (三)
0x10024   0xF0 0x90 0x80 0xA4    Linear B qe (𐀤)
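
Any Unicode-aware language can confirm this table; for example, in Python:

    for ch in "Aհத三𐀤":
        print(f"0x{ord(ch):05X}:", " ".join(f"0x{b:02X}" for b in ch.encode("utf-8")))
    # 0x00041: 0x41
    # 0x00570: 0xD5 0xB0
    # 0x00BA4: 0xE0 0xAE 0xA4
    # 0x04E09: 0xE4 0xB8 0x89
    # 0x10024: 0xF0 0x90 0x80 0xA4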

The first byte of the UTF-8 encoding of a character contains the information about how many additional bytes are used to encode it. If the high bit of the first byte is 0, the character is an ASCII character and no additional bytes are used to encode it. If the high bit is 1, at least one additional byte is part of the encoding. The number of adjacent bits set starting with the high bit is the total number of bytes used to encode the character. For example, if the top three bits are 110, the character is encoded using two bytes. The first byte of a multi-byte character therefore consists of from two to six 1s followed by a 0. The remaining bits can be either 1 or 0 and contribute to the encoding of the character.

The following chart shows how characters in different ranges are encoded in UTF-8. The letter n represents a bit that contributes directly to the encoding; it can be either 0 or 1.

UTF-32 Code Range      Byte 1    Byte 2    Byte 3    Byte 4    Byte 5    Byte 6    Bits  Slots
00000000 - 0000007F    0nnnnnnn                                                     7    128
00000080 - 000007FF    110nnnnn  10nnnnnn                                          11    2,048
00000800 - 0000FFFF    1110nnnn  10nnnnnn  10nnnnnn                                16    65,536
00010000 - 001FFFFF    11110nnn  10nnnnnn  10nnnnnn  10nnnnnn                      21    2,097,152
00200000 - 03FFFFFF    111110nn  10nnnnnn  10nnnnnn  10nnnnnn  10nnnnnn            26    67,108,864
04000000 - 7FFFFFFF    1111110n  10nnnnnn  10nnnnnn  10nnnnnn  10nnnnnn  10nnnnnn  31    2,147,483,648
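
The chart translates directly into code. The following Python sketch implements the full six-byte scheme shown above (standard UTF-8 has since been restricted to four bytes); the function name is our own:

    def utf8_encode(codepoint):
        # Encode one codepoint (up to 0x7FFFFFFF) per the chart above.
        if codepoint < 0x80:
            return bytes([codepoint])          # plain ASCII: one byte
        # Determine how many bytes are needed.
        nbytes = 2
        for limit in (0x800, 0x10000, 0x200000, 0x4000000, 0x80000000):
            if codepoint < limit:
                break
            nbytes += 1
        # Peel off six bits per continuation byte, low bits first.
        out = []
        for _ in range(nbytes - 1):
            out.append(0x80 | (codepoint & 0x3F))   # 10nnnnnn
            codepoint >>= 6
        # First byte: nbytes 1s, a 0, then the remaining high bits.
        out.append(((0xFF << (8 - nbytes)) & 0xFF) | codepoint)
        return bytes(reversed(out))

    print(utf8_encode(0x4E09).hex())   # e4b889: Chinese digit 3, as in the table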

How this works is most easily seen by examining specific examples. Here are the bit patterns of the same characters as above. In the UTF-8 encoding a hyphen separates the bits that directly contribute to the encoding from the preceding bits.

NameUTF-32UTF-8
Latin capital letter a000000000000000000000000010000010-1000001
Armenian small letter ho00000000000000000000010101110000110-10101 10-110000
Tamil letter ta000000000000000000001011101001001110-0000 10-101110 10-100100
Chinese digit 3000000000000000001001110000010011110-0100 10-111000 10-001001
Linear B qe0000000000000001000000000010010011110-000 10-010000 10-000000 10-100100

For example, take Chinese digit 3, encoded as 1110-0100 10-111000 10-001001. Stripping off the bits at the beginning that are not directly part of the encoding, we obtain: 0100 111000 001001. Concatenating these and padding out to 32 places by adding 0s on the left, we obtain: 00000000000000000100111000001001, which is the UTF-32 encoding.
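
The reverse computation is just as mechanical; in Python:

    b = [0xE4, 0xB8, 0x89]        # 1110-0100 10-111000 10-001001
    value = b[0] & 0x0F           # the four payload bits of the first byte
    for cont in b[1:]:
        value = (value << 6) | (cont & 0x3F)   # six payload bits per continuation byte
    print(hex(value))             # 0x4e09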

Although in principle we could encode 4,294,967,296 different characters in 32 bits, UTF-8 can only encode 2,216,757,376 characters in six bytes. This is unlikely to be a problem in practice. But if we really did need more than 2,216,757,376 characters, we could use a seventh byte, with the first byte set to 11111110. This would give us 36 useful bits, for an additional 68,719,476,736 slots, allowing us to encode a total of 70,936,234,112 characters. This is considerably more than can be represented in UTF-32.

Notice that it is not only the first byte in the UTF-8 encoding of a character whose upper bits play a special role. The top two bits of all non-initial bytes must be 10. This seems to be a waste, since it means that each additional byte only contributes six bits rather than eight. The reason for doing this is that it allows us to locate ourselves in a stream of UTF-8 encoded characters:

Let's consider what it would be like if we used an encoding scheme like UTF-8, in that we use the first byte of a sequence to tell us how many more bytes contribute to that character, but in which we don't mark continuation bytes by setting their high bits to 10. Since we don't need to distinguish between the leading sequences 10 and 11, we can also modify our rule for encoding the number of bytes in a character. The number of adjacent 1s at the high end of the first byte of a character will now give us the total number of additional bytes needed to complete the character. So if a byte has high bit 0, as before, that byte is a complete character in itself. If the high bits are 10, this will now mean that a total of two bytes are used. If the high bits are 110, this will now mean that a total of three bytes are used, and so forth.

Suppose that someone transmits the Japanese word くるま /kuruma/ "wheel". In UTF-32 this is encoded as 0x304F 0x308B 0x307E. In UTF-8, this becomes:

0xE3 0x81 0x8F 0xE3 0x82 0x8B 0xE3 0x81 0xBE
11100011 10000001 10001111 11100011 10000010 10001011 11100011 10000001 10111110

In our UTF-8-like system with no continuation marker, it will be encoded like this:

0xC0 0x30 0x4F 0xC0 0x30 0x8B 0xC0 0x30 0x7E
11000000 00110000 01001111 11000000 00110000 10001011 11000000 00110000 01111110

Now, suppose that in the course of transmission the third byte is lost. A program reading the UTF-8 input will detect an error because the first byte tells it the second and third bytes must have 10 as the two highest bits. As soon as it reads the third byte (that is, the original fourth byte), whose high two bits are 11, it knows that a byte is missing. Furthermore, it knows which character is damaged and can go on to read the next character. It knows that the byte it has just read (0xE3) begins a new character since its high bits are 11. The result will therefore be ?るま, where ? stands for the unknown, damaged character. In fact, no matter which byte is lost, the damaged character will be detected and the program will be able to resynchronize and locate the next character.
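
Python's UTF-8 decoder behaves in exactly this way. With errors="replace" it substitutes U+FFFD (the replacement character) for the damaged sequence and resynchronizes at the next character:

    damaged = bytes([0xE3, 0x81,          # く with its third byte lost
                     0xE3, 0x82, 0x8B,    # る
                     0xE3, 0x81, 0xBE])   # ま
    print(damaged.decode("utf-8", errors="replace"))   # �るま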

Suppose that the same error, loss of the third byte, happens when we are using our pseudo-UTF-8 system. We will have started off by reading the first byte, 0xC0, which will have told us that we need two more bytes to complete the character. Since the original third byte has been lost, these will be the original second and fourth, 0x30 and 0xC0. The fact that the high bits of the fourth byte are 11 does not signal an error in this system since a continuation byte is permitted to have this pattern. The first character will therefore be taken to be 0011000011000000 = 0x30C0, which is ダ (katakana /da/). The next byte is 0x30. Since its high bit is 0, no additional bytes are needed, and it will be taken to be the digit 0. The next byte is 0x8B. The leading 1 tells us that this is the first of a two byte sequence. We strip the two high bits of the first byte and concatenate the two, yielding: 00101111000000 = 0x0BC0. This is the Tamil diacritic for the vowel /i:/. The last two bytes each have a leading 0 and so stand alone. They are the digit 0 and the tilde (~). No error is detected by the program, which instead of the intended くるま produces ダ0ீ0~. A human being reading this will of course recognize it as garbled. However, he or she will have no idea what may have been intended, whereas someone who understands Japanese may well be able to guess the missing character in ?るま. Furthermore, if the error can be detected by a computer program, it may be possible to correct it immediately, whereas a human being may not look at the text until much later.

One source of resistance to using UTF-8 in some countries is that it seems to privilege English and other languages that can be written using only the ASCII characters. English only takes one byte per character in UTF-8, while most of the languages of India, for instance, require three bytes per character. By the standards of today's computer processors, storage devices and transmission systems, text files are so small that it really doesn't matter, so I don't think that this is a practical concern. It's more a matter of pride and politics.

If we don't need the extinct writing systems and other fancy stuff outside of the Basic Multilingual Plane, we could all be equal and use UTF-16. English and some other languages would take twice as much space to represent, but other languages would take the same space that they do in UTF-8 or even take up less space. At least from the point of view of those of us who aren't English imperialists, this might not be a bad idea, if not for the fact that UTF-8 has another big advantage over UTF-16: UTF-8 is independent of endianness.

Whenever a number is represented by more than one byte, the question arises as to the order in which the bytes are arranged. If the most significant bits come first, that is, are stored at the lowest memory address or at the first location in the file, the representation is said to be big-endian. If the least significant bits come first, the representation is said to be little-endian.

There is a third arrangement that is historically important because it was used on the Digital Equipment PDP-11 series. In PDP-endian order, the most-significant byte is the second byte, the next most significant byte is the first byte, the third most significant byte is the fourth byte, and the least significant byte is the third byte. In other words, it is "big-endian" in the sense that the first two bytes are more significant than the second two bytes, but "little-endian" in the internal ordering of the two halves.

Consider the following sequence of four bytes. The first row shows the bit pattern. The second row shows the interpretation of each byte separately as an unsigned integer.

bit pattern      00001101  00000110  10000000  00000011
decimal value          13         6       128         3


Here is how this four byte sequence is interpreted as an unsigned integer under the three ordering conventions.

Little-Endian     (3 * 256 * 256 * 256) + (128 * 256 * 256) +   (6 * 256) +  13  =     58,721,805
Big-Endian       (13 * 256 * 256 * 256) +   (6 * 256 * 256) + (128 * 256) +   3  =    218,529,795
PDP-Endian        (6 * 256 * 256 * 256) +  (13 * 256 * 256) +   (3 * 256) + 128  =    101,516,160

These orderings are often described in terms of the sequence of bytes, from least significant to most significant, like this:

Little-Endian    1 2 3 4
Big-Endian       4 3 2 1
PDP-Endian       3 4 1 2
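
The standard Python struct module reproduces the little-endian and big-endian rows (it has no PDP-endian mode):

    import struct

    data = bytes([0b00001101, 0b00000110, 0b10000000, 0b00000011])   # 13, 6, 128, 3
    print(struct.unpack("<I", data)[0])   # 58721805  (little-endian)
    print(struct.unpack(">I", data)[0])   # 218529795 (big-endian)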


Most computers these days are little-endian since the Intel and AMD processors that most PCs use are little-endian. Digital Equipment machines from the VAX through the current Alpha series are also little-endian. On the other hand, most RISC-based processors, such as the SUN SPARC and the PowerPC, as well as the IBM 370 and Motorola 68000 series, are big-endian. A program that determines the byte order of the machine on which it is run can be downloaded here.
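
In the same spirit as that program, a couple of lines of Python will report the byte order of the machine they run on:

    import struct
    import sys

    print(sys.byteorder)   # 'little' or 'big'

    # Equivalently: pack 1 in native order and see which end the 1 lands on.
    print("little" if struct.pack("=H", 1)[0] == 1 else "big")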

UTF-16 and UTF-32 are subject to endianness variation. If I write something in UTF-16 on a little-endian machine and you try to read it on a big-endian machine, it won't work. For example, suppose that I encode the Armenian character հ ho on a little-endian machine. The first byte will have the bit pattern 01110000, conventionally interpreted as 112. The second byte will have the bit pattern 00000101, conventionally interpreted as 5. That's because the UTF-32 code, 0x570 = 1392, is equal to (5 * 256) + 112. Remember, on a little-endian machine, the first byte is the least significant one. On a big-endian machine, this sequence of two bytes will be interpreted as (112 * 256) + 5 = 28,677 = 0x7005, since the first byte, 112, is the most significant on a big-endian machine. Well, 0x7005 isn't the same character as 0x570; it's a CJK unified ideograph, not an Armenian letter. So, if you use UTF-16 you have to worry about byte order. UTF-8, on the other hand, is invariant under changes in endianness. That is a big enough advantage that most people will probably continue to prefer UTF-8.
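
The mixup is easy to reproduce with Python's explicit utf-16-le and utf-16-be codecs:

    ho = "\u0570"                      # Armenian small letter ho (հ)
    data = ho.encode("utf-16-le")      # the two bytes 0x70 0x05
    wrong = data.decode("utf-16-be")   # read back in the wrong byte order
    print(hex(ord(wrong)))             # 0x7005, not 0x570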



Sources of Information

The fullest information is found in the Unicode standard. This is available on the Unicode Consortium web site [http://www.unicode.org], in print form, and on CD. Two files that can be obtained from the web site or the CD are often useful. The file UnicodeData.txt contains details of most characters. It is a plain text file in which, for the most part, each line contains information about one character. Each such line contains a series of fields separated by semi-colons. The first field is the code, in hexadecimal; the second field is the name of the character. The other fields contain additional information of various sorts.
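
Reading the file is straightforward. Here is a minimal sketch in Python, assuming a local copy of UnicodeData.txt (the path and function name are our own):

    def load_names(path="UnicodeData.txt"):
        # Map each codepoint to its name: field 0 is the hex code, field 1 the name.
        names = {}
        with open(path, encoding="ascii") as f:
            for line in f:
                fields = line.split(";")
                names[int(fields[0], 16)] = fields[1]
        return names

    names = load_names()
    print(names[0x0B87])   # TAMIL LETTER I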

UnicodeData.txt is intended primarily to be read by machines. Another file, NamesList.txt, contains a subset of the information in UnicodeData.txt, omitting details primarily of use to computer programs, reformatted to be more readable by human beings. This is the best place to look for a character by name.

Both of these files omit character-by-character descriptions for the Chinese characters. This information is kept in a separate file, Unihan.txt, since it is voluminous (25MB uncompressed, 5MB compressed) and not needed by many users. This file does not give simple descriptions of the Chinese characters comparable to those for other characters; for the most part, the information given consists of cross references to various reference works.



Tools

A useful tool for dealing with Unicode is yudit, a Unicode text editor. If supplied with the appropriate fonts (sources for which are listed on the yudit website), yudit can display UTF-8 text. You can edit the displayed text, and you can enter text in several ways. By using a keymap you can type in a romanization and have the text appear in whatever writing system you choose. Numerous keymaps are supplied with yudit, but it is not difficult to write your own if necessary. If you know the code for the character you want to enter, you can enter it by its numerical code. yudit also recognizes Chinese characters drawn with the mouse. If you move the mouse over a character and left-click, yudit will display the corresponding character code.

Here is a screenshot of yudit displaying a sampling of writing systems.

[Screenshot: the yudit Unicode editor displaying text in a variety of writing systems]

Sometimes it is useful to find out about the content of a document for which you do not have the necessary fonts, which is in a writing system that you do not understand, which contains characters that are not directly visible, or whose exact encoding you want to examine. Two programs useful for such purposes are unidesc and uniname, both of which can be downloaded here. unidesc reports the character ranges to which different portions of the text belong. It can also identify Unicode encodings flagged by magic numbers. uniname prints the byte offset of each character, its hex code value, its UTF-8 encoding, and its name.

A convenient tool for converting from one Unicode encoding to another is uniconv, which comes with the yudit editor. uniconv can convert from one Unicode encoding to another, or between Unicode and various other encodings. In addition to a number of built-in encodings, uniconv can use keymaps created for use with yudit. For example, if you have a keymap that allows you to enter text into yudit in romanization and have it appear in a non-Roman writing system, uniconv will use the same keymap to convert text from that romanization to Unicode or another encoding. The GNU program iconv can also convert between Unicode encodings and between Unicode and numerous other character encodings.
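
The core of such a converter is tiny. Here is a rough Python analogue, not the uniconv or iconv program itself; the filenames in the comment are hypothetical:

    def convert(infile, outfile, from_enc="utf-8", to_enc="utf-16"):
        # Decode the input in one encoding and re-encode it in another.
        with open(infile, encoding=from_enc) as src:
            text = src.read()
        with open(outfile, "w", encoding=to_enc) as dst:
            dst.write(text)

    # convert("tamil.txt", "tamil-utf16.txt", "utf-8", "utf-16")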



Revised 2004/01/19 03:02.