A Python API for evaluating coverage of glyph sets in font projects.
# Google Fonts Glyphset Definitions
This repository contains curated glyphsets that Google Fonts hands out to designers of commissioned fonts.
> [!NOTE]
> If you are a user and merely want to get your hands on ready-made glyphsets, pick your files straight out of the `/data/results` folder: `.glyphs` files with empty placeholder glyphs, or `.plist` files (so-called Custom Filters) that show up in the Glyphs.app sidebar when placed alongside your source files. Alternatively, you can cook your own Custom Filters with the `glyphsets` tool; see the Glyphsets Tool section at the bottom of this document.

The rest of this README addresses people who edit glyphset and language definitions.
The repository recently (end of 2023/start of 2024) underwent a major overhaul in how the glyphsets are assembled.
The current approach is part of a larger network of tools that also comprises gflanguages and shaperglot, as well as fontbakery's `shape_languages` check.
In the ideal scenario, glyphsets are defined merely by lists of language codes (such as `tu_Latn`).
During the build process (`sh build.sh`), the `gflanguages` database is queried for all characters defined for those languages, which are then combined into a single glyphset.
Optionally, encoded characters as well as unencoded glyphs may be defined in glyphset-specific or language-specific files here in `gfglyphsets`, whose contents are also added to the final glyphsets.
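Conceptually, this combination step is a set union over per-language character sets plus any extra definitions. The sketch below is illustrative only; the sample data and helper names are made up and are not the gflanguages API:

```python
# Illustrative sketch of the build-time combination step. The data and
# helper below are made up; the real build queries the gflanguages package
# for each language's defined characters.
SAMPLE_LANGUAGE_CHARS = {
    "en_Latn": {"A", "B", "a", "b"},
    "de_Latn": {"A", "B", "a", "b", "ä", "ö", "ü", "ß"},
}

def build_glyphset(language_codes, extra_chars=frozenset()):
    """Union all characters of the given languages, plus optional extras
    from glyphset- or language-specific definition files."""
    glyphset = set()
    for code in language_codes:
        glyphset |= SAMPLE_LANGUAGE_CHARS[code]
    return glyphset | set(extra_chars)

combined = build_glyphset(["en_Latn", "de_Latn"], extra_chars={"ẞ"})
```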
Later, during font QA (as part of font onboarding work), Fontbakery's `shape_languages` check first determines which glyphsets a font supports, then uses the languages defined for each glyphset to invoke `shaperglot`, which checks whether each language shapes correctly.
This is quite a leap forward in font QA: `shaperglot` invokes the `harfbuzz` shaping engine to verify that the entire OpenType stack functions at once, including mark attachment and character sequences.
`shaperglot` contains its own sets of script- or language-specific definitions, such as a check to see whether `ı` and `i` shape into distinct letters in small caps for Turkish.
> [!NOTE]
> See GLYPHSETS.md for an up-to-date description of the state of the new glyphset definitions. Many glyphsets have not been transitioned to the new approach and still exist as manually curated lists of characters and unencoded glyphs.
## How to assemble glyphsets

### Prerequisites

For the build command to correctly assemble glyphsets using language definitions, make sure that your work environment has the latest version of gflanguages. If unsure, update it with `pip install -U gflanguages`.
Oftentimes you may want to adjust language definitions in `gflanguages` at the same time as you're adjusting other parts of the glyphsets. In this case you may clone the `gflanguages` repository to your computer and install it using `pip install -e .` from within its root folder. This exposes your `gflanguages` clone to your entire system (or virtual environment), and changes in `gflanguages` are automatically reflected in other tools that use it, such as `gfglyphsets`, without re-installing after every code or data change. Thus, running `sh build.sh` will use your latest language definitions even before you have PR'd your language definition changes back to the repository.
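The editable-install workflow can be sketched as follows. The clone URL placeholder is deliberate; substitute the actual location of the `gflanguages` repository:

```shell
# Editable-install workflow (sketch). Replace the placeholder with the
# real gflanguages repository URL.
git clone <URL-of-the-gflanguages-repository> gflanguages
cd gflanguages
pip install -e .   # code/data changes now take effect without re-installing
```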
## Where are glyphsets defined?

Inside this repository, data is defined in two different places.

One place is inside the `glyphsets` Python package (`/Lib/glyphsets/definitions`). This data needs to be exposed to third-party tools such as `fontbakery`.

The other place is `/data/definitions`. This data is only used for authoring glyphsets and need not be distributed as part of the Python package.
- **Inside the Python package:** Glyphsets are defined in `.yaml` files inside the Python package folder at `/Lib/glyphsets/definitions`.
- **Outside of the Python package:** Additional files in the `/data/definitions` sub-folders become part of the glyphsets as soon as they are found to exist under a certain filename. If a file that you need doesn't exist there, create it in its place.
## Where are characters and glyphs defined?

To determine where characters (encoded with a Unicode codepoint) or glyphs (unencoded) are defined, follow this logic:

- Is it a language-specific encoded character? Then it goes into the `gflanguages` database (which is a separate package), for example here. `gflanguages` holds only encoded characters, not unencoded glyphs. Prepare a separate PR for `gflanguages` if you are changing those definitions as well.
- Is it a language-specific unencoded glyph? Then it goes into `/data/definitions/per_language`.
- Is it a more general glyphset-specific character or glyph? Then it goes into `/data/definitions/per_glyphset`.

If you find that you need additional separate definitions per script, contact @yanone to implement it.
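The routing logic above can be summarized in a few lines. The helper name and return values are illustrative, not part of the glyphsets API:

```python
# Hypothetical helper mirroring the decision logic described above.
# The function name and returned strings are illustrative only.
def definition_location(encoded: bool, language_specific: bool) -> str:
    """Return where a character/glyph definition belongs."""
    if encoded and language_specific:
        return "gflanguages"  # separate package, needs its own PR
    if language_specific:
        return "/data/definitions/per_language"
    return "/data/definitions/per_glyphset"
```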
## (Re-)building glyphsets

Once your language and glyphset definitions are set up and edited, run `sh build.sh` from the command line. This command sources characters from `gflanguages` as well as characters and glyphs from the various files in the `/data/definitions` folder, combines them into one comprehensive list per glyphset, and renders each list out into various data formats in the `/data/results` folder.

Additionally, the GLYPHSETS.md document is updated, which contains a human-readable overview of the state of each glyphset.
### Data flow visualization

Here's a visual overview of the data definitions that go into each glyphset, and the files that are created as results.
Read it top to bottom.
```
DEFINITIONS:

┌──────────────────┐
│  Language codes  │
│    "en_Latn"     │
│    "de_Latn"     │
│       ...        │
└──────────────────┘
         │
┌──────────────────┐                          ┌──────────────────┐
│   gflanguages    │                          │   .stub.glyphs   │
│ (Python package) │                          │    (optional)    │
└──────────────────┘                          └──────────────────┘
         │                                             │
         ╰──────────────────────┬──────────────────────╯
                                │
BUILD PROCESS:                  │
                                │
                ╔═══════════════════════════════╗
                ║       complete glyphset       ║
                ╚═══════════════════════════════╝
                                │
RESULTS:                        │
                                │
         ╭──────────────────────┼──────────────────────┬──────────────────────╮
         │                      │                      │                      │
┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐
│       .txt       │   │       .nam       │   │     .glyphs      │   │      .plist      │
│  (nice & prod)   │   │                  │   │                  │   │                  │
└──────────────────┘   └──────────────────┘   └──────────────────┘   └──────────────────┘
```
## Glyphsets Tool

> [!NOTE]
> Previously existing commands of the `glyphsets` tool (`update-srcs`, `nam-file`, `missing-in-font`) are currently deactivated after the transition to the new database. Please report if you need to use these.
### Custom Filters

You can create your own Glyphs.app Custom Filters using the `glyphsets` tool.

Install or update the tool with pip, if you haven't already:

```
pip install -U glyphsets
```

Create a filter list for Glyphs.app:

```
glyphsets filter-list -o myfilter.plist GF_Latin_Core GF_Latin_Plus
```

Place this `.plist` file next to your Glyphs file and (after a restart) you will be able to see it in the filters sidebar.
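For illustration, here is roughly what such a filter list could contain, written with Python's standard-library `plistlib`. The `name`/`list` keys mirror the common Glyphs.app Custom Filter structure, but treat the exact schema and plist flavor as assumptions; the `glyphsets filter-list` command produces the authoritative file:

```python
# Sketch of a Custom Filter .plist written with the standard-library plistlib.
# The "name"/"list" keys are an assumption about the Glyphs.app schema, and
# plistlib writes XML plists, which may differ from the tool's output flavor.
import plistlib

filters = [
    {"name": "GF_Latin_Kernel (sample)", "list": ["A", "B", "C", "a", "b", "c"]},
]

with open("myfilter.plist", "wb") as f:
    plistlib.dump(filters, f)

# Round-trip to confirm the file is a well-formed plist
with open("myfilter.plist", "rb") as f:
    loaded = plistlib.load(f)
```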
### Compare Glyphsets

You can compare the contents of two or more glyphsets against each other. Each consecutive glyphset is compared to the previous one.

This command lists the complete contents of `GF_Latin_Kernel` first, and then lists only the extra (or missing) glyphs of `GF_Latin_Core` compared to `GF_Latin_Kernel`:

```
glyphsets compare GF_Latin_Kernel GF_Latin_Core
```
Output:
GF_Latin_Kernel:
===============
Total glyphs: 116
Letter (52 glyphs):
`A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z`
...
GF_Latin_Core:
=============
Total glyphs: 324
GF_Latin_Core has 208 **extra** glyphs compared to GF_Latin_Kernel:
Letter (168 glyphs):
`ª º À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ð Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü Ý Þ ß à á â ã ä å æ ç è é ê ë ì í î ï ð ñ ò ó ô õ ö ø ù ú û ü ý þ ÿ Ā ā Ă ă Ą ą Ć ć Ċ ċ Č č Ď ď Đ đ Ē ē Ė ė Ę ę Ě ě Ğ ğ Ġ ġ Ģ ģ Ħ ħ Ī ī Į į İ ı Ķ ķ Ĺ ĺ Ļ ļ Ľ ľ Ł ł Ń ń Ņ ņ Ň ň Ő ő Œ œ Ŕ ŕ Ř ř Ś ś Ş ş Š š Ť ť Ū ū Ů ů Ű ű Ų ų Ŵ ŵ Ŷ ŷ Ÿ Ź ź Ż ż Ž ž Ș ș Ț ț ȷ Ẁ ẁ Ẃ ẃ Ẅ ẅ ẞ Ỳ ỳ /idotaccent`
...
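The chained comparison semantics can be sketched with plain set differences. The function and the glyph contents below are made up for the example and are not the tool's implementation:

```python
# Sketch of the chained comparison: each glyphset is diffed against the
# previous one. Names and contents are illustrative, not real glyphset data.
def compare_chain(glyphsets, names):
    """Return (name, extra, missing) per glyphset, vs. the previous one."""
    results = []
    previous = None
    for name in names:
        current = glyphsets[name]
        if previous is None:
            results.append((name, current, set()))  # first set: list everything
        else:
            results.append((name,
                            current - glyphsets[previous],   # extra glyphs
                            glyphsets[previous] - current))  # missing glyphs
        previous = name
    return results

sample = {
    "Kernel": {"A", "B", "C"},
    "Core": {"A", "B", "C", "À", "É"},
}
for name, extra, missing in compare_chain(sample, ["Kernel", "Core"]):
    print(name, sorted(extra), sorted(missing))
```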
## Acknowledgements

GF Greek Glyph Sets defined by Irene Vlachou (@irenevl) and Thomas Linard (@thlinard). Documented by Alexei Vanyashin (@alexeiva), January 2017.

GF Glyph Sets defined by Alexei Vanyashin (@alexeiva) and Kalapi Gajjar (@kalapi) from 2016-06-27 to 2016-10-11, with input from Dave Crossland, Denis Jacquerye, Frank Grießhammer, Georg Seifert, Gunnar Vilhjálmsson, Jacques Le Bailly, Michael Everson, Nhung Nguyen (Vietnamese lists), Pablo Impallari (Impallari Encoding), Rainer Erich Scheichelbauer (@mekkablue), Thomas Jockin, Thomas Phinney (Adobe Cyrillic lists), and Underware (Latin Plus Encoding).