Oma is a font recognition tool. Built on Apple’s Core ML, it uses machine learning to identify and display the fonts presented to it. It is designed for designers and type enthusiasts who need to identify a typeface quickly and reliably. I have an intense interest in machine learning and what it might mean for our future, so as well as being motivated to turn my hobby into a business, the project is a personal stepping stone towards a possible career shift into machine learning research.
Working as a designer, I find I have this need at least a few times a month. While on vacation I often come across interesting signage and unusual type that I would love to identify, and I have also needed a way to store images and their specific locations efficiently for research and inspiration related to my professional work. Many of my friends, a large number of whom are also designers, have the same need. That is where the inspiration for Oma originated.
The working title of the app is Oma, derived from Ogma/Oghma, the Celtic god of writing, who was said to have invented the Ogham alphabet. The identity pays homage to sci-fi films and literature that deal with AI, and is reminiscent of HAL 9000.
As the app primarily uses a mobile device’s camera output, and in many ways acts like a camera app, a logical step was to revisit the research we had done for Obscura on the ergonomics and physical handling of the app on the device.
I surveyed a group of designers to learn what their needs might be; the common answers were simplicity and getting the correct typeface with as little hassle as possible. While I had proposed an app with multiple features, the simplicity of Oma was enticing to many, and the possibility of creating more products for the design industry in the future was enticing to me. As a result, I cut many of the features I had originally proposed.
The app is simple to use, with only three primary screens:
1. Capture, for taking photos
2. Settings
3. Saved results
The app shows a live result when the camera is pointed at type; from there, the user is prompted to save the result by capturing the image, and is then linked to the type foundry where the typeface can be purchased. Captured images live in a library of saved results, where detailed information on each can be found. An album feature is also available for organised, clear archiving.
I am hoping to ship a beta version via TestFlight later this year and gather feedback from the community of developers and designers who have tested, and are still testing, Obscura.
In the meantime I will also be seeking legal advice on using .OTF files to train my model, and on what implications that may have.
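To illustrate what "training on .OTF files" involves, the data-generation step can be sketched as rendering labelled text samples from each font file. This is a hypothetical sketch, not the app's actual pipeline: the function name, canvas size, and sample string are my own, and it assumes Pillow is installed. In a real pipeline each .OTF would be loaded with `ImageFont.truetype("SomeFont.otf", 32)`; here Pillow's built-in bitmap font stands in so the sketch is self-contained.

```python
# Hypothetical sketch of generating training images from fonts.
from PIL import Image, ImageDraw, ImageFont

def render_sample(font, text="Hamburgefonstiv", size=(256, 64)):
    """Render a text sample in the given font onto a white canvas.

    `font` is a PIL ImageFont; each rendered image would become one
    training example, labelled with the name of the source font.
    """
    img = Image.new("L", size, color=255)  # greyscale, white background
    draw = ImageDraw.Draw(img)
    draw.text((4, 4), text, font=font, fill=0)  # black sample text
    return img

# Built-in bitmap font used as a stand-in for a real .OTF file.
sample = render_sample(ImageFont.load_default())
```

Many labelled images like this, varied in size, weight, and sample text, would then feed a Core ML image classifier whose labels are font names.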
I intend to keep testing the interface design and to expand the beta group to include more graphic designers and typography enthusiasts, gathering feedback from the people I hope will use the product in the future. Potential partners include Fonts in Use and the type foundry Grilli Type, who have already provided feedback and guidance.
+353 (0)83 0520 621