The holiday is over and the camera is full of photos, which raises the question: where should the images be stored?
You’d say in iCloud or the Photos app, and that’s indeed a good place. I personally do not like to place my private photos on servers outside of my full control, and I was afraid of using the Photos app due to the closed-box library iPhoto used to create. The latter seems to have become less of a problem, since it stores the original files in the Masters folder, ordered by import date, inside its library.
So I’m using the Photos app for my memories again, stored on local devices. But before that I used Image Capture.
Image Capture is a solid tool that comes with macOS for syncing pictures to a local folder. But files will keep their original file names. That’s why I created a little command line tool I’d like to share with you now:
Source Code
The tool is rather simple. By passing a source and a destination folder it will copy or move images. It has options to put the files into folders, e.g. by year, and to add some additional information like picture dimensions. By creating hash values it will ignore files it has already handled. This is great to make sure you did not miss any picture in case you saved your images in multiple places over the years ;)
To get back to our Photos library, this would be an example call to copy all originals from your library to a separate folder on the desktop:
photofolder --smart-copy -p -o ~/Desktop/MyPhotosBackupByYear ~/Pictures/Photos\ Library.photoslibrary/Masters/
Published on August 24, 2018
Just a short note that I started a separate blog for news related to my product Receipts. I hope you’ll subscribe there as well and learn more about the future of this product.
I had fun making the new website using my open source project SeaSite, about which I already wrote earlier in this blog: “Static websites the jQuery way”. Even though the whole project is made of static pages, it was easy to do with those helpers.
For SeaSite I added a localization plugin to be able to reuse templates for English and German. In order to better respect the GDPR, I added features to load things like YouTube videos or the Disqus UI lazily “on click”.
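A minimal sketch of such an “on click” loader could look like this; the placeholder class and the data-embed attribute are assumptions, not SeaSite’s actual markup:
document.querySelectorAll('.lazy-embed').forEach(function (el) {
  el.addEventListener('click', function () {
    var iframe = document.createElement('iframe')
    iframe.src = el.dataset.embed // e.g. the YouTube embed URL
    iframe.allowFullscreen = true
    el.replaceWith(iframe) // only now does the browser contact the third party
  })
})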
Another very useful feature of SeaSite is the outline generated when parsing Markdown. This way the help pages got a nice table of contents as a side info. The help document itself was edited in Typora, I wrote about the approach earlier in this blog.
Finally, the search is also a nice feature that does not require any dynamic code on the server. As a final step, SeaSite extracts all headers and contents from the created static pages and puts the data into a JSON file. That file is loaded lazily, only when the user starts typing. The Fuse.js library adds the search logic and fuzzy string support.
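As a rough sketch, assuming a search.json file with title and content fields (the file name and fields are assumptions), the lazy loading and Fuse.js part could look roughly like this:
import Fuse from 'fuse.js'
let fuse = null
document.querySelector('#search').addEventListener('input', async (e) => {
  if (!fuse) {
    // Load the pre-built index only once the user starts typing
    const pages = await (await fetch('/search.json')).json()
    fuse = new Fuse(pages, { keys: ['title', 'content'] }) // fuzzy matching by Fuse.js
  }
  const results = fuse.search(e.target.value)
  console.log(results)
})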
Published on August 31, 2018
A super simple approach
In the web world, especially in React projects, the state
of the app is an essential ingredient. The idea is that all visual representation originates from the current state, i.e. if the state is changed the UI is likely to change as well.
In the Cocoa world this pattern usually plays no role when you start a new project. There is some API like UIStateRestoring, but it requires some extra work.
I sat down and tried to build something that is super simple and maintainable. It should provide the following features:
Ok, so first of all we need the state itself. I used my SeaObject implementation, which I wrote about before. Therefore it already covers the requirement “store and restore states” out of the box. Here is an example:
@interface State : SeaObject
@property NSString *searchString;
@property NSString *currentViewID;
@end
In this simple example we would store a search string and a pointer to the currently visible view controller.
Now we need to put that state somewhere. NSDocument
seems to be ideal for that. To access it from any view controller we create a subclass of NSViewController that we will use throughout the project and add a property called document to it. The following code will set it for us:
- (void)viewWillAppear {
[super viewWillAppear];
self.document = self.view.window.windowController.document;
}
- (void)viewDidDisappear {
self.document = nil;
[super viewDidDisappear];
}
We can now access the state via self.document.state
or even bind values to it. In the demo code we added a little helper for observing state changes, which can be used like this (for the keyPath
trick see this blog article):
[self observeKeyPath:keyPath(self.document.state.searchString) action:^(id newValue) {
[self performCustomSearchWithString:self.document.state.searchString];
}];
That’s basically it. Only the navigation is missing, and this is super easy if we start to store the states in a custom NSUndoManager
. These are the two methods needed:
- (void)restoreState:(State *)state {
[self storeState];
self.state = state;
}
- (void)storeState {
if (![self.state isEqual:self.lastStoredState]) {
self.lastStoredState = self.state.copy;
[self.stateStack registerUndoWithTarget:self
selector:@selector(restoreState:)
object:self.lastStoredState];
}
}
Now calling [self.stateStack undo]
will restore the state to what it was when you called storeState
the last time. This way you can have your marks for when it makes sense to store a state.
Source Code
In the example code on GitHub you will find some additional stuff like setupController and cleanupController methods, where you can put the observers and do other work once the state becomes available or goes away. The example also contains code that shows how the firstResponder can be restored.
I would love to get your feedback on that approach. Of course I’m not the first to reason about states; see e.g. objc.io’s App Architecture for other approaches.
Published on September 17, 2018
Since macOS 10.14 Mojave, different visual modes like dark and light are available for the desktop. Native app developers can use named system colors to get the best fit for both semantic colors (like “window background”, “secondary label”) and plain colors (like “red”, “green”).
To adopt these colors for web development it is useful to get access to their color values for use in CSS. That’s what my new project DesertColor at GitHub provides.
dessertcolor.css
contains these extracted colors. With <html data-mode="dark">
the dark variant is used, otherwise the light one. The colors can be accessed by their name:
body {
color: var(--label-color);
background: var(--control-background-color);
}
But be aware that Dark Mode in Mojave is not only about plain colors; there is more to it, like “Desktop Tinting” or “Translucency”.
Source Code
Published on September 25, 2018
Certainly every macOS or iOS developer has used a framework in their project, and CocoaPods makes it even easier to use 3rd party ones.
But did you create one yourself?
You really should, because it has some great advantages:
The benefit I like the most is that creating frameworks for certain areas of your code base forces you to precisely separate them from the rest of the code. Often the code - especially old and long grown one - has references to different parts of your project. Creating a framework gives you the opportunity to clean that up and ask yourself: “Does this still make sense this way?”
The outcome of creating a framework should also be to get a clean and minimal API. Minimal in the sense of: “What is it that is really required to execute that functionality?” You’ll be surprised how simple things may become.
The main purpose of a framework is to reuse it again and again in other projects. This indeed makes a lot of sense and can pay off the upfront investment of creating a framework in the first place.
For example I have some code that provides logging and debugging features I like to use as my basic toolbox. It also contains categories for objects I need all day, like string manipulation or date parsing and formatting. That is my first import on a new project. And also other frameworks I wrote use this base framework as well.
A project should always be free of warnings and errors; otherwise you’ll not notice when some little problem that already showed up as a warning - which you ignored because it was hidden under dozens of others - causes you headaches a while later. But sometimes it is hard to get rid of warnings, because they would require a lot of workarounds or live in third party code you don’t want to touch. In your own frameworks just turn these warnings off once your code is stable and will not get changed any more.
Writing test suites for frameworks makes sure you cover their functionality completely. You can focus on the single thing a framework is supposed to do, and the tests will not float around in a list of others in one big main project.
As a real life example I would like to talk about my projects PDFify and Receipts. PDFify was actually made to have a test environment for my OCR and PDF features I offer in Receipts. I separated OCR, PDF extraction and creation, Mail Parsing and the Scanner Interface and made a framework for each of them. I was able to write test cases specific to the different topics thus having a minimal functionality I can focus on. I can even reuse most stuff for iOS as well or for future projects to come. Sharing the code with other developers is also much easier this way.
I found it easier to add all frameworks to the main project, even if a framework used sub-frameworks. The problems I ran into were related to signing.
Frameworks finally got better support on iOS than in the early days, but there are still some pitfalls to care about:
OTHER_LDFLAGS = -all_load -ObjC
MACH_O_TYPE = staticlib
Frameworks also work great with HockeyApp (I know, this service is going to get replaced, but it is still first choice for me right now). Get more info here: https://support.hockeyapp.net/kb/client-integration-ios-mac-os-x-tvos/how-to-solve-symbolication-problems
Published on October 11, 2018
After putting some energy into optimizing my websites receipts-app.com, pdfify.app and holtwick.de, I’d like to share my experiences.
Putting defer
into script loading tags will load the scripts when the DOM is ready and in the order they appeared in your HTML. Put them in the head
so the browser gets to know about them early and can already start loading while the rest of the page is still to be loaded and prepared.
But be aware that JS objects might not be available to inline scripts. For example, if loading jQuery using defer, the $ will only be available after the DOM has loaded! So the best approach is to wrap inline scripts like this:
window.addEventListener('load', () => {
// Your code
})
The page should start rendering only when the CSS is loaded, to avoid flickering effects. It is hard to tell which CSS styles are really used by a page. Usually the custom CSS is not super large, so consider inlining it to avoid an additional request and have it ready early.
Even if you are not using a CDN (Content Delivery Network) for your website itself, it can speed things up to use one for common CSS and JS like jQuery or Bootstrap. But if the rendering does not depend on JS code, it is also absolutely ok to serve them from the same server as the page.
Remember the defer tip from the beginning; it will also work for those.
Of course reducing the size of files is always a good idea for production sites. UglifyJS does a good job for JS, and the same can be done for CSS.
If the resulting CSS isn’t too big, it absolutely makes sense to inline it into the HTML header, which will improve the “first meaningful paint” benchmark and thus give the user the impression that the page loaded much faster.
Dropping image resources on ImageOptim will usually have good results in reducing their file sizes, which also helps speed up page load.
Using more modern image types like WEBP is also possible, but requires more changes in the HTML, like using the <picture> element etc., and is not yet supported by all browsers. The work might not be worth the benefit right now.
Not everything needs to be available on the first step. With some Javascript magic image loading might be a good thing to defer. But also 3rd party integrations like embedded YouTube videos or Disqus discussion groups don’t need to be loaded immediately. On receipts-app.com for example I provide an offline search, which also only loads if the user types some words. The user experience is usually still the same.
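For deferred image loading, one common approach is an IntersectionObserver; here is a small sketch (the data-src attribute is an assumption):
const io = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src // load only when the image becomes visible
      io.unobserve(entry.target)
    }
  }
})
document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img))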
Tracking etc. should not slow down your page loading. Put it just in front of the closing </body>
tag.
While optimizing all the details of the page you might forget about the server side. If you see a big “Waiting Time (TTFB)” in your Network Inspector this might be due to things going on on the server.
For example this could be a PHP script connecting to a database. I recommend profiling the PHP calls, which is simple using XDebug and an IDE like IntelliJ. In my case I opened the database without needing to. Another quick fix was to connect to MySQL via 127.0.0.1 instead of localhost.
Don’t forget to configure a caching policy for your site that makes sense. Also compress the outgoing traffic, especially the HTML.
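As a hedged example, if the pages happened to be served by a small Express/Node server, both points could look like this (paths and the max-age value are just placeholders):
const express = require('express')
const compression = require('compression') // gzip/deflate for outgoing responses, especially the HTML
const app = express()
app.use(compression())
app.use(express.static('public', { maxAge: '365d', immutable: true })) // long Cache-Control for static assets
app.listen(8080)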
To see if the optimizations had an effect you could e.g. use the following tools for measuring, just to name a few:
For my own projects I use my own static web site builder: SeaSite. It does all the repeating tasks for me and is easy to maintain. Read more about it in this blog post.
Published on October 25, 2018
Recently I have seen some discussion about Electron on macOS (John Gruber, Dave Verwer). I have some experience in the field and will share my view on the topic and a practical introduction here.
In general web technologies are a de facto standard, not only in classic web browsers. There are some valid reasons for that:
So it is no wonder that developers, but also product managers, have some sympathy for the platform.
As soon as a server-driven service wants to create a desktop application, it seems natural to start an Electron project and point it to the service’s URL. This of course has no additional value for the user besides having the app in the Dock.
But Electron can do more, as Visual Studio Code and others demonstrate. As soon as the app makes use of the file system, native menus and other features, it comes closer to a real desktop app.
The look and feel these days isn’t that important anymore since the skeuomorphic design world transformed into a flat one.
But there are some disadvantages as well:
I recently started a new project called OnePile and had to decide which technologies I’d like to use. I have written many pure native apps, like my latest ones Receipts and PDFify, but I have to admit I like the latest Javascript syntax very much.
Since I have a lot of code I would like to reuse (OCR, Scanner Dialog, Share Extensions, PDF Logic), I really just need web technologies for the UI. So instead of taking Electron and trying to put my stuff in there, I have chosen to use WKWebView, and here is my story:
First of all I need to set up a connection between the native (Objective-C) and web view (Javascript) sides, which I named: Bridge.
Sending to Javascript is as easy as calling evaluateJavaScript:completionHandler:. For the other direction I use WKScriptMessageHandler.
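On the Javascript side such a handler, registered for example under the name bridge (the name is an assumption), is reached via the standard WKWebView API:
// Inside the WKWebView: send a payload to the native WKScriptMessageHandler
window.webkit.messageHandlers.bridge.postMessage({
  action: 'fetchData',
  responseAction: 'tmp-123',
  payload: { skip: 0, limit: 100 }
})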
As you can see there is some asynchronous code involved, and this is where my bridge code comes into play. It defines a loose protocol for how the data that is sent needs to be structured, for example:
{
"action": "fetchData",
"responseAction": "tmp-123",
"payload": { "skip": 0, "limit": 100 }
}
This works in both directions and means that fetchData should be called and the response sent to responseAction. These actions are registered with the bridge code. Their return value becomes the payload of the response call. The response actions are registered temporarily and deleted after a response has been received successfully or a timeout has been reached.
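A minimal Javascript sketch of such an action registry could look like this; the names and the missing timeout handling are simplifications, not the actual OnePile bridge code:
const actions = {}
function register(name, fn) {
  actions[name] = fn
}
function send(msg) {
  // transport back to the native side, e.g. via the message handler shown above
  window.webkit.messageHandlers.bridge.postMessage(msg)
}
// Called for every message arriving from the other side of the bridge
async function handleMessage(msg) {
  const fn = actions[msg.action]
  if (!fn) return
  const payload = await fn(msg.payload) // the return value becomes the response payload
  if (msg.responseAction) {
    send({ action: msg.responseAction, payload })
  }
}
register('fetchData', async ({ skip, limit }) => {
  return { items: [] } // placeholder for the real data access
})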
Using this technique it is quite easy to trigger native things like menus or popovers. I can also easily access my existing components. Also, all the database handling is done on the native side.
This works well for data that does not exceed some Kilobytes, but for providing images it would slow down the process. Therefore, I also implemented WKURLSchemeHandler, not only for images but also for the HTML pages that make the basis for the web app part.
Another nice benefit from it is that it can be used to trigger heavy computation resulting in large data. In the case of OnePile I use it to render PDF on the fly to PNG.
But Apple does not only offer WKWebView for hosting Javascript, it also provides the slick JSContext for headless code. I also make use of that for example to extract data from the notes to build the full text index on the native side.
With some fine-tuning you can nicely catch exceptions and pass logging to the native log for better debugging experience. I was also able to add support for require
to dynamically load modules into the JSContext.
And the best part: all this applies to iOS as well as to macOS. You can actually reuse the whole glue code. With Electron it is also all or nothing, whereas with this approach you may decide view by view if you want to use it or not.
For the discussion about Electron and WKWebView it does not matter which frameworks you use on the web side, but I just want to express my love for Vue.js here 😍 For a macOS developer used to bindings this is a nice and powerful tool to feel right at home in the shiny web world.
Would you like to see it in action? No problem, you can, by subscribing to the early preview of my new app Collect:
Subscribe for preview of OnePile App
Update 2018-12-27:
This article has received great feedback on Hacker News with very interesting and often profound discussions.
In the previous version of this article I said Electron is based on Chrome, which is owned by Google; that was not correct. It is based on Chromium, which itself is the basis of Chrome, but the influence of Google on the code is still strong. Even though I don’t think Google is intentionally putting bad code into the project, my point in mentioning this was that the code base is so large that usually nobody using it for a web project would ever do a code review before using it. The same of course applies to WKWebView, but since it is part of the operating system Apple ships, I think if you trust the OS at this point you can trust this component as well.
One argument I often heard has been: Your approach is not cross platform. And indeed this particular native implementation only works in the Apple ecosystem. Similar approaches in other environments can be achieved with reasonable effort. Projects like Cordova or Capacitor are built this way and there are more examples available to start with. But this is just a thin layer and the web code is the one that will grow and this is truly cross platform. In case of OnePile I might even build Windows and Linux clients on Electron first, but the web code will still stay the same for all destinations apart from some glue code for UI like menus.
Published on December 20, 2018
Since the beginning I have been using and loving HockeyApp, because it was made by very skilled developers who knew exactly what makes the life of a developer easier. It started with symbolicated crash reports, which previously were difficult to get, and improved the overall quality of software products that integrated and still integrate with it.
Success attracts buyers and so HockeyApp was acquired by Microsoft and is now transitioning to their own platform called AppCenter, which will certainly do a good job in the future but isn’t yet where HockeyApp is right now.
I took that as an opportunity to look out for alternatives and found Sentry. First of all it is OpenSource and can be installed on premise. But usually you will go with their service which comes with a reasonable pricing.
It has a nice and clear web interface and supports everything I need, especially Objective-C and Javascript, which is a great fit for my Electron alternative.
The integration is easy e.g. by using their CocoaPod. With a few lines of code and providing the DSN of your project you can send Events. In my case I also wanted to let the user decide if he is ready to share the data and therefore added an NSAlert
in the setShouldSendEvent
block, which looks similar to this:
[SentryClient.sharedClient setShouldSendEvent:^BOOL(SentryEvent * _Nonnull event) {
__block NSInteger ret = 0;
dispatch_sync(dispatch_get_main_queue(), ^(void) {
NSAlert *alert;
// ...
ret = [alert runModal];
});
return ret == NSAlertFirstButtonReturn;
}];
With some more code the user sees this:
What I liked a lot about HockeyApp was their feedback dialog, which allowed the user to describe what situation led to the problem, and that I could begin a dialog with them and involve them in beta testing the fix. Mainly I did this using the great support tool from my friends at replies.io. I wrote about Replies earlier in this blog.
So my goal was to integrate replies.io again with my new Sentry setup, and this is what the Send and Support button is doing. It opens the support form of replies.io and the user has the chance to get in contact and provide more info. The link between the ticket on Sentry and the one on replies.io is the userId
set for both services, which is a unique ID per client.
Another aspect I changed while integrating the new service, was the tracking of errors. I’m referring to those errors you would usually write to the log file and only see if the user sends the log. This is very valuable information that you should use to improve your software.
I’m using a logging framework, which I described earlier here in this blog. So the most natural thing to do is to hook into the logError part and create an event for Sentry:
- (void)logLevel:(SeaLogFlag)level file:(const char *)file function:(const char *)function line:(NSUInteger)line context:(NSString *)context message:(NSString *)message {
int slevel = 0;
if (level == SeaLogFlagInfo) slevel = kSentrySeverityInfo;
if (level == SeaLogFlagWarning) slevel = kSentrySeverityWarning;
if (level == SeaLogFlagError) slevel = kSentrySeverityError;
if (slevel) {
SentryBreadcrumb *bc = [[SentryBreadcrumb alloc] initWithLevel:slevel category:@"log"];
bc.data = @{ @"message": message ?: @"",
@"file": [NSString stringWithFormat:@"<%@:%@>", @(file ?: ""), @(line)]
};
[[SentryClient.sharedClient breadcrumbs] addBreadcrumb:bc];
}
if (level == SeaLogFlagError) {
[SentryClient.sharedClient snapshotStacktrace:^{
SentryEvent *event = [[SentryEvent alloc] initWithLevel:kSentrySeverityError];
event.message = message;
[SentryClient.sharedClient appendStacktraceToEvent:event];
[SentryClient.sharedClient sendEvent:event withCompletionHandler:^(NSError * _Nullable error) {
;
}];
}];
    }
}
You might notice the breadcrumb object. This is the way to pass logging info to Sentry, which I do by creating breadcrumb objects for log on the info and warning level.
In order to get the most out of the stack trace the service should symbolicate the crash report and show the line numbers, where the problem originated. HockeyApp had its own awesome macOS app doing the job. But Sentry also provides a command line tool that integrates well with the archive step in Xcode and confirms the upload with a notification:
The same applies to Javascript. The integration again is straightforward, but I encountered one problem because I was using it inside a WKWebView. Setting the transport option fixed the issue for me:
Sentry.init({
// ...
transport: Sentry.Transports.FetchTransport,
})
It is also important to get the release right. With some webpack and process.env tricks it worked quite well to get the version info from package.json into the distributed code as release information.
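One way this can be done is with webpack’s DefinePlugin; this is just a sketch of the idea, and the variable name RELEASE is an assumption:
// webpack.config.js
const webpack = require('webpack')
const pkg = require('./package.json')
module.exports = {
  plugins: [
    new webpack.DefinePlugin({
      'process.env.RELEASE': JSON.stringify(pkg.version), // baked in at build time
    }),
  ],
}
// In the app code the value can then be handed to Sentry:
// Sentry.init({ dsn: '...', release: process.env.RELEASE })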
Of course you should upload your SourceMaps to Sentry to get the most out of the errors. Also the breadcrumbs will automatically be generated from the console.xyz
calls.
I enjoyed modernizing my crash and error handling code and infrastructure and finally getting more valuable info out of the logging data. I am also happy to have found a good way to respect my users’ privacy and leave the decision to them whether they would like to share their data or not; since my products may handle sensitive data, it is important to win the users’ trust.
Sentry seems to be a good alternative to HockeyApp. I would have stayed with HockeyApp, but since they are migrating to a new platform, I felt some migration effort for my projects would have come anyway, so why not do it right now. That Sentry is OpenSource is another plus on the list.
If you would like to see it in action - although I hope you’ll never need to see the dialog 😉 - please try my products:
Published on January 7, 2019
Universally unique IDs are great. Used in a database in place of the primary key, you no longer need to care about conflicts if more than one application instance is involved. But also inside an app they are convenient object identifiers.
In general, version 4 as described in RFC 4122 is used, which is essentially composed of random bytes. For macOS and iOS, NSUUID provides a simple-to-use interface. For Javascript projects node-uuid is widely used.
A drawback is that it is usually used in the form of a string. An incrementing integer, in contrast, is very likely managed more efficiently and faster by databases and systems.
UUIDs are mostly presented in a hexadecimal notation like f81d4fae-7dec-11d0-a765-00a0c91e6bf6
which is a 36 characters long string. In binary representation it is 16 bytes or 128 bits long. For most environments this does not fit into a regular integer type anymore.
Since each byte counts if you have a lot of data, it makes sense to put some effort into reducing the size. Stripping the - already saves 4 bytes. But to save the most, without getting into difficulties with string encoding and the like, Base64 looks like a good choice.
But characters like +
and /
and =
become painful when used in URLs or as file names. So let’s replace +
by -
and /
by _
as Python has done for ages. Then strip the trailing ==
and we get a string 22 characters long, which is 1.375 times the size of the most compact binary representation and significantly smaller than the original hex representation.
Here is the implementation in Objective-C:
+ (NSString *)uuid {
uuid_t uuid;
[NSUUID.UUID getUUIDBytes:uuid];
return [[[[[NSData dataWithBytes:uuid length:16]
base64EncodedStringWithOptions:0]
stringByReplacingOccurrencesOfString:@"=" withString:@""]
stringByReplacingOccurrencesOfString:@"+" withString:@"-"]
stringByReplacingOccurrencesOfString:@"/" withString:@"_"];
}
And this one is for Javascript:
import uuid from 'uuid'
// Helper to turn the byte array into Base64 (Node.js; in the browser e.g. via btoa)
const toBase64 = (bytes) => Buffer.from(bytes).toString('base64')
const rxUUIDReplace = /[+=\/]/g
const mapUUIDReplace = {
'+': '-',
'/': '_',
'=': ''
}
export function UUID() {
const array = []
uuid.v4(null, array)
return toBase64(array).replace(rxUUIDReplace, m => mapUUIDReplace[m])
}
Happy uniqueness ;)
Published on January 7, 2019
Everybody is talking about Marzipan and the amalgamation of UIKit and AppKit. This step is a logical consequence for Apple. But it is still only beneficial for the Apple universe, and cross platform is much more than iOS, macOS, tvOS and watchOS.
I have been writing macOS and iOS apps for over a decade now, but for my latest project I decided to choose a different approach. I decided to use web technologies in the front end, as I described in this previous article. You might say: Oh, you are using Electron and Cordova. Well, almost.
Instead of using existing web app wrappers I wrote my own. This might sound stupid, but it has some benefits. First of all, the footprint on disk of these apps is much smaller. Then I have full control over native features, even the newest ones. And I can reuse a lot of existing native code as well.
For the front end I use HTML, CSS and JS. If you did not use it for a long time you will be surprised how much it has matured over time. Javascript in its current form of ECMAScript7 (ES7) and later has evolved to a super powerful yet sexy language. All the packaging is greatly performed by webpack, which you might refer to as the compiler of web apps.
But what indeed is a big difference is the fresh wind blowing in user interface concepts. ReactJS and VueJS are certainly the most production ready frameworks. You need to change your mindset in order not to reach out to a certain view and manipulate its properties. Instead you have a state and derive the rendering of everything from it. The framework does updates in a smart and efficient way at 60fps. This allows you to split a large project into smaller, easier to handle pieces. And these little pieces can be reused on all platforms, even if the visual appearance has to be different.
For mobile devices I found a great UI framework called Framework7, which does a fantastic job. It comes really close to how the native UI looks and feels. But you benefit from the full flexibility of a web app. And you just need to set up the UI once and it looks great on both iOS and Android out of the box.
So for now I’m super happy with this approach and first results are in the wild at onepile.app.
Published on March 14, 2019
Most CSS frameworks offer a layout system based on columns for creating text and content based websites. But for a single page web app the requirements are different. App development frameworks offer various techniques to structure the contents of a window; a simple yet powerful one is stack based view layout.
Let’s start with a common example that you might find in a mail or notes app. You have multiple columns like a sidebar, a document list, a content area. The list might also have a search field. An additional column to the right gives detailed information.
<div class="app">
<div class="sidebar">
<div>Sidebar</div>
</div>
<div class="middle">
<div>Search</div>
<div class="list">
<div>Item1</div>
<div>Item2</div>
<div>Item3</div>
</div>
</div>
<div class="content">
<div class="menu">
Content related menu
</div>
<div class="text">
Text
</div>
</div>
<div class="info">
<div>Info</div>
</div>
</div>
Each group has a different orientation for its children. All children of app are placed horizontally. Those of middle and content are placed vertically. We will introduce hstack and vstack CSS classes that define the appropriate layout by using flexbox. This is the code in SCSS:
.app, .hstack, .vstack {
display: flex;
&, > * {
flex: none;
overflow: hidden;
}
}
.app, .hstack {
flex-direction: row;
}
.vstack {
flex-direction: column;
}
With flex: auto everything grows and shrinks as required. But we instead want to give some columns a fixed width or height, and flex: none will do that. For those elements that we want to grow or even be scrollable, we will add the following options:
.-grow {
flex: auto;
}
.-scroll {
overflow: auto;
}
Now we can add these options to list
and content
like this:
<div class="content -grow -scroll"></div>
That’s basically it. You will find a demo at JSFiddle here. It also shows how to add separators and how to prepare html
and body
correctly.
Published on March 15, 2020
Last year I started working on video chat projects. First peer.school, with which I wanted to contribute to the education of younger children. From that grew Brie.fi/ng, which focuses on secure communication.
Briefing in particular has found many friends on GitHub and is now available in several languages, including Chinese and, very recently, Russian.
The project has a very simple design and uses the Javascript framework Vue. All the necessary functions are available, plus even such exciting features as hiding or swapping a person’s background.
Some native apps are also available, but among them the iOS App with support for AppClips (enter rooms via QR code or NFC transmitter) is especially worth mentioning.
Besides sharing a URL to enter a shared room, there is also the possibility to embed Briefing directly into your own website. A simple IFrame makes it possible, like this one:
<iframe
src="https://brie.fi/ng/cool-call-62?audio=1&video=1&fs=0&invite=0&prefs=0&share=0"
allow="camera; microphone; fullscreen; speaker; display-capture"
></iframe>
For easy configuration, there is now an IFrame Tool where anyone can quickly configure their code themselves with a few clicks:
If you want to have full control, you can also install all the components that are necessary for operation yourself. Instructions can be found in the README on Github.
Briefing is open source, but under the EUPL v1.2 license, which is a European answer to the GPL. This means that changes have to be published under the same license.
However, if it is desired not to publish changes, it is certainly fair to acquire a commercial license that offers these freedoms. Everything about this can also be found in the README.
Currently I’m working on a third product that should follow in the footsteps of Briefing. The focus is on better scalability, privacy and encryption, and realtime collaboration. The development is already well advanced. More about that soon here…
Published on April 16, 2021
LanguageTool is the secret weapon for producing error-free texts in different languages. Personally, I’m at war with many a grammar rule and am glad when the machine warns me before it gets embarrassing 😅
However, I believe not everything I write needs to be sent to a service that is great, but not under my control.
So here are some tips to set up and use an appropriate server yourself.
The easiest way to set up your own server is via Docker. The image erikvl87/languagetool lends itself well to this. And this is how it is loaded and started:
docker pull erikvl87/languagetool
docker run -d --restart unless-stopped -p 8010:8010 erikvl87/languagetool
Great, now the server can be reached at http://example.com:8010. There is more background information at https://dev.languagetool.org/http-server
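A quick way to check that the server responds is the /v2/check endpoint of the HTTP API, for example from Javascript (replace example.com with your own host):
const params = new URLSearchParams({
  language: 'en-US',
  text: 'This are a test.', // LanguageTool should flag the grammar here
})
fetch('http://example.com:8010/v2/check', { method: 'POST', body: params })
  .then((res) => res.json())
  .then((result) => console.log(result.matches.map((m) => m.message)))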
The extension LanguageTool Linter provides all necessary functions to work with LanguageTool. The server URL can be entered in its settings. The editing of texts and Markdown becomes much more comfortable.
The following adjustments to the settings have proven useful:
{
"languageToolLinter.external.url": "http://example.com:8010",
"languageToolLinter.hideDiagnosticsOnChange": true,
"languageToolLinter.hideRuleIds": true,
"languageToolLinter.lintOnChange": true,
"languageToolLinter.lintOnOpen": true
}
There are numerous plugins available for browsers and other apps, see https://languagetool.org/#plugins. In the Firefox extension, you can enter your own server under “Experimental settings (only for advanced users)”. The version path must be appended to the URL, in our example: http://example.com:8010/v2/.
Published on April 15, 2021
I recently read about a “massive fake news machine” that the German Foreign Ministry claims to have uncovered (see below). It can hardly be distinguished from legitimate news.
It occurred to me spontaneously that if you are not able to stop manipulated news, you should at least be able to identify legitimate news.
The idea is simple: let’s create a digital network of trust using cryptographic means.
The simplest form of signing is the calculation of a hash value. So let’s assume I want to tweet as @holtwick: “I don’t like fake news!” and my secret would be “Gurkensalat” (usually called a “salt”). In the terminal it would go like this:
echo -n "@holtwick I don't like fake news! Gurkensalat" | shasum -a 256
Alternatively with OpenSSL:
echo -n "@holtwick I don't like fake news! Gurkensalat" | openssl dgst -sha256
The result is:
55bc8fe5493377787bfc0be29417fd692070dc8446b855d62a9fedca5b536d53
This can also be calculated before I tweet it. Then I take part of the result, e.g. the first 8 characters, and send it with the message, perhaps with a distinguishing feature such as a preceding ~
character:
@holtwick
I don’t like fake news! ~55bc8fe5
Now it’s time to verify something like this. A small service on your own website would be conceivable. Here is a small example in PHP of what an online check could look like:
<?php
// This one is top secret :)
$salt = "Gurkensalat";
// Normalize whitespace to avoid copy paste issues
$sample = trim(preg_replace('/[\t\n\r\s]+/', ' ', $_GET["text"])) . " " . $salt;
// Calculate SHA256 and only use the first 8 chars
$hash = substr(hash('sha256', $sample), 0, 8);
// Compare hashes and return result
echo strcmp($_GET["hash"], $hash) == 0 ? "ok" : "invalid";
?>
The resulting URL would then be: https://holtwick.de/experiments/id.php?text=@holtwick%20I%20don%27t%20like%20fake%20news!&hash=55bc8fe5
Ok, that was a relatively primitive implementation to illustrate the idea. For a serious application, other techniques would certainly be used, such as a public key procedure or blockchains. Of course, there are already services that do something similar, as this overview from the Federal Network Agency shows.
Warning
I’m no cryptography expert and I’m sure I’ve overlooked some attack vectors, but I still think that there are technical possibilities that can at least make it more difficult to claim things in a false name.
Published on February 7, 2024
I’d like to share some interesting findings I made while implementing a cross platform crypto format.
Usually you will need these features from a crypto implementation: hashing (e.g. SHA2), key derivation (e.g. PBKDF2) and symmetric encryption (e.g. AES).
This is a pretty basic set of requirements and most crypto implementations support these. But as always the devil is in the details, because if you make a small mistake you will just get a wrong result and this result will not help you find the cause of the bug. It is wrong or it is right. Basta.
So cryptography is applied on raw binary data. This may already be a source of trouble: is the string encoding (UTF-8) handled correctly, and are the Base64 and Hex helpers working correctly?
On iOS and macOS you will usually use NSData, which is fine. In the Javascript world you will soon be in trouble. If your target is node.js you can use Buffer, which is quite nice, but on the web you will find yourself using Uint8Array pretty soon, which has no nice native toolset for converting between UTF-8 strings, hex and base64 representations.
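A few small helpers are usually enough to fill that gap in the browser; this is only a sketch, not a hardened library:
const toUTF8 = (str) => new TextEncoder().encode(str) // string -> Uint8Array
const fromUTF8 = (bytes) => new TextDecoder().decode(bytes) // Uint8Array -> string
const toHex = (bytes) => Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('')
const toBase64 = (bytes) => btoa(String.fromCharCode(...bytes)) // fine for small buffers
const fromBase64 = (s) => Uint8Array.from(atob(s), (c) => c.charCodeAt(0))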
Random data makes it harder for an attacker to crack encrypted data and passwords, so using initialization vectors (IV) for encryption and a salt for key derivation from passwords is good practice.
But IV support on node.js is totally broken and ends in an exception under certain circumstances. I was pretty surprised about that, but I was not able to find a workaround, which made me switch to the Web Cryptography API.
Another trap you can fall into while testing is exactly this randomness of IV and salt: if you forget to preset them in your test cases, the results will always differ, because different random elements are created each time. Sounds obvious, but it will happen ;)
You will read a lot about which algorithms are the best, but in real life you’ll end up taking what is there and implemented on all platforms you are targeting.
For checksums SHA256 and SHA512 are standard. For key derivation you will usually only find support for PBKDF2 everywhere. For symmetric encryption the standard is AES. But it comes in different flavors. I found CBC was a good compromise that is available almost everywhere. CTR is also pretty widely supported.
Ok, you now have what you need and can start happily encrypting. It would be great if all you need were self-contained, but usually it is not. You will need to store the salt somewhere to rebuild your key and you will also need the IV to decrypt your data.
So let’s store the IV with the encrypted data, like: IV + cipherText = package
.
But we should also make sure the data cannot be manipulated, therefore we add an HMAC
over the data of the previous package and add it as well: IV + cipherText + HMAC = package
.
That’s nice. Now you can use the handy tools I mentioned in Binary Data to deconstruct it for decryption ;)
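The packaging itself then boils down to concatenating and slicing byte arrays, roughly like this (the sizes assume a 16 byte IV and a 32 byte SHA-256 HMAC):
function pack(iv, cipherText, hmac) {
  const out = new Uint8Array(iv.length + cipherText.length + hmac.length)
  out.set(iv, 0)
  out.set(cipherText, iv.length)
  out.set(hmac, iv.length + cipherText.length)
  return out
}
function unpack(pkg) {
  const iv = pkg.slice(0, 16) // 16 byte IV
  const hmac = pkg.slice(pkg.length - 32) // 32 byte SHA-256 HMAC
  const cipherText = pkg.slice(16, pkg.length - 32)
  return { iv, cipherText, hmac }
}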
I’m not a crypto expert and I’ll be very happy to hear your feedback about it e.g. on Twitter @holtwick.
Published on February 13, 2018
I love Vue.js from Evan You and I like static websites. Of course there are already solutions to combine these two passions, like VuePress or Nuxt. But would I be a programmer if I chose the simple way?
Of course I wanted to get to the bleeding edge and was quickly inspired by Evan’s newest coup: vite. It throws the ballast of webpack overboard and does everything right. First I tried my luck with it and vitepress, but unfortunately that was not quite what I was looking for.
So I took a step back and looked at the classics of static website generation: Gatsby, Hugo, Jekyll and 11ty. They did everything right too, but not everything came off the shelf the way I would have liked. Especially since I had already built my own solution, SeaSite, with which all my websites were generated.
But what was it that I wanted? I have found out the following points for me:
Well, for point 1 I had already discovered esbuild in vite. It is so incredibly fast that I could not believe it. The result is also reliable and exactly as it should be. esbuild was set as the tool that I wanted to use.
So I first built a small Node.js script that transpiled a Javascript file. I also built a small library to register routes. The generation of the content should be done on-demand when the website is requested by a simple Express.js webserver. To generate the static pages I would simply generate and save the content for all registered routes. This worked great and took only milliseconds.
Quickly I wanted to have the comfort of vite, i.e. when files change, the browser reloads immediately. With Chokidar I could watch the folder with the JS files and recompile everything via esbuild. With a little trick, the import cache of Node.js could be bypassed and the new JS could be loaded and executed. With socket.io a reload mechanism for the browser was quickly assembled.
I had now finally caught fire and there was no turning back. Then it could also become more beautiful :) Unfortunately I didn’t succeed in integrating Vue.js at the first go, but I also doubted if this would make sense at all. In SeaSite I had already used JSX and JSDOM. For another project I had already written a DOM abstraction, which is very lean. I now extended it in a way that HTML and XML could be generated easily with JSX.
This made it possible to manipulate the content with simple DOM actions. But how much nicer it would be, if the corresponding nodes could be found by CSS selectors. So I also implemented the css-parse and it worked fine.
Also a markdown parser was already available from SeaSite and was only extended to provide meta data for the registration of routes while maintaining the pleasant speed.
So now everything was on board that was needed and it was time to create a simple unified structure to publish the project. A first goal was to describe the routes with simple data structures to get maximum flexibility. For common formats like HTML, XML, JSON, text and assets convenient methods were created.
Since everything had the appearance of a web server anyway, which can also spit out static pages, it was obvious to adopt the smart middleware pattern of Koa.js. This way, templates and plugins are easy to realize. A copy of the data structure mentioned above then serves as the context and the result is expected in the property ctx.body.
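To illustrate the pattern, here is a generic sketch of Koa-style middleware composition; it is not Hostic’s actual API, just the idea of plugins passing a context along and leaving the result in ctx.body:
// Generic middleware composition in the style of Koa.js
function compose(middlewares) {
  return async function run(ctx) {
    let i = -1
    async function dispatch(n) {
      if (n <= i) throw new Error('next() called multiple times')
      i = n
      const fn = middlewares[n]
      if (fn) await fn(ctx, () => dispatch(n + 1))
    }
    await dispatch(0)
  }
}
const render = compose([
  async (ctx, next) => { await next(); ctx.body = '<html><body>' + ctx.body + '</body></html>' }, // layout plugin
  async (ctx) => { ctx.body = '<h1>' + ctx.title + '</h1>' }, // page template
])
// const ctx = { title: 'Hello' }; await render(ctx); then write ctx.body to disk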
Here it is now, the final project. I would be very happy about help and ideas. Maybe it is not the greatest tool to create static websites, but maybe it is the basis for an even smarter solution that builds on it.
https://www.npmjs.com/package/hostic
In the coming posts I will further explore some of the issues that arise when creating a website and how they can be solved with Hostic. The list of current ideas on topics:
These websites are already driven by Hostic:
Published on September 2, 2020
For every developer there comes the moment where speed matters. It saves you a relevant amount of time and keeps the flow going.
esbuild is definitely fast and reduces the build time significantly. And it is nice and simple too when it comes to setting it up.
It can be started from the command line or nicely integrated into a node.js script like this:
const esbuild = require('esbuild')
const options = {
target: 'node12',
platform: 'node',
jsxFactory: 'h',
jsxFragment: 'hh',
bundle: true,
outfile: 'out.js',
sourcemap: 'inline',
loader: {
'.js': 'jsx',
'.css': 'text',
},
entryPoints: [`${sitePath}/index.js`],
}
await esbuild.build(options)
This will build a single JS file containing everything that is needed to run. It also translates JSX and uses the function h to create elements. It also loads files ending in .css as plain text. A source map will be written as well. All this is done in a fraction of a second! This is because esbuild is written in Go instead of Javascript, because speed matters sometimes.
Speaking of source maps the same author of esbuild also wrote a module to support them on node: node-source-map-support.
Now the setup is almost complete, but how about testing? I usually use jest for testing and therefore I wanted to get it working here as well. The solutions available did not fit my case, therefore I wrote my own transform:
First make sure to tell Jest what to do in a package.json
section:
{
"jest": {
"transform": {
"^.+\\.jsx?$": "./src/jest-transform.js"
},
"testEnvironment": "node",
"testPathIgnorePatterns": [
"node_modules/",
"dist/"
]
}
}
The transformer looks like this:
// Inspired by https://github.com/aelbore/esbuild-jest#readme
const fs = require('node:fs')
const esbuild = require('esbuild')
const pkg = require('../package.json')
const external = [
...Object.keys(pkg.dependencies ?? {}),
...Object.keys(pkg.devDependencies ?? {}),
...Object.keys(pkg.peerDependencies ?? {}),
]
module.exports = {
getCacheKey() { // Forces to ignore Jest cache
return Math.random().toString()
},
process(content, filename) {
esbuild.buildSync({
target: 'node14',
platform: 'node',
jsxFactory: 'h',
jsxFragment: 'h',
bundle: true,
outfile: 'out-jest.js',
sourcemap: 'inline',
loader: {
'.js': 'jsx',
'.css': 'text',
},
entryPoints: [filename],
external,
})
const js = fs.readFileSync('out-jest.js', 'utf-8') // read the file esbuild just wrote
fs.unlinkSync('out-jest.js')
return js
},
}
Why would you want to you use esbuild and not webpack, babel, rollup, etc.? Well, because it is fast and easy to use. The other solutions are blown up and become pretty complex after a while. They have many 3rd party dependencies, which can cause troubles as well.
If you want to experience the blatant acceleration, then try esbuild.
Published on August 20, 2020
The most valuable and most successful companies are IT companies: Apple, Google, Amazon and Microsoft. All based in the USA. China, another huge market, is the production site of most hardware, but also the seat of companies that successfully take the place of US companies: Alibaba, Tencent and more. Emerging markets also have their own market, where online business takes place. KaiOS is an example of an innovative model that is hardly known in the first world.
Where does Europe rank? There are no significant online platforms. Businesses and administrations make themselves dependent on software from the USA and hardware from China. Creative entrepreneurs and talented developers are migrating to the USA. In the long run, Europe will become less relevant as an innovator and remain an importer of technology. This is disastrous.
Europe needs a strategy and the political will to change this trend. But the potential and the resources are there to make Europe more independent in a changing global world. In the following I would like to present my personal thoughts on such a strategy.
Europe should under no circumstances try to copy the USA or China. The existing successful solutions are basically centralized solutions. This means that services are offered, where the data of the customers are in the access of the operating companies. No matter how well the privacy promises on the websites can be read, de facto there is no protection of the data from the companies or the authorities of the country where the company is located - usually the USA or China.
Data is the commodity of the information age. The first difference that Europe should make in its strategy is the absolute protection of this data. Users should have full access and control over their data so as not to become dependent on individual companies. This can be achieved through decentralized solutions and strong encryption.
It must be possible to display and edit data. Computers, tablets, smartphones and, over time, other devices will be used for this purpose. It is important which operating system is used, because this is the foundation on which all solutions are built. Microsoft, Google and Apple have a monopoly (https://de.statista.com/themen/783/betriebssysteme/), all of them based in the USA. Europe should focus on its own free platform, with Linux and Android as a first starting point.
But besides the operating system, another secret operating system has emerged: the web browser. Originally a European invention, it has revolutionized the access to information on the Internet. In recent years, complexity has increased and most computer services can now be accessed using web technologies. The programming is easy to learn and the knowledge of it is widely spread. Now with WebAssembly there are no real performance problems anymore.
Europe should take advantage of the fact that there is the proven web platform. However, after the weakening of Firefox only one browser engine is relevant and this is controlled by Google and Apple. Europe should develop its own open engine in the short term, for which Servo is the obvious choice. In the long term the operating system as a whole could be replaced by a web engine, following the example of Google Chrome OS.
It is also important to be able to trust the devices themselves. Many chips contain their own small operating systems, which take over important functions, and normal developers do not have access to their internals. It is important to become more independent in development and production, preferably with an open approach. Why shouldn’t several hardware manufacturers produce the same type of chips? ARM is a successful model in this area. Unfortunately I don’t know enough about this topic to be able to go into more detail. But I think it is important to create a basis that can be trusted and that is not monopolized.
The public sector, i.e. administrations and authorities, should be obliged to use and develop open source software. Any solution developed with government funds must be freely and openly accessible and must be able to be operated without limits and free of license costs. The money currently spent on licenses for Windows in Europe alone should be enough to promote software development. Contracts for development and maintenance should in turn be awarded in Europe in order to stimulate and build up expertise and markets here.
Both education in dealing with IT and the transfer of knowledge through IT should be strengthened. Especially in times of Corona, the deficits have clearly become apparent. The needs for an educational infrastructure are certainly comparable throughout Europe, but there is no common effort to find a solution as far as I know.
In the area of learning materials there are now various solutions, for example in Germany Anton App, Musste Wissen or SimpleClub. But also internationally like Khan Academy. Nevertheless, each teacher prepares his lessons individually and experiences are not effectively shared. A better promotion of such content could improve both traditional and virtual teaching and thus position Europe better, as the level of education increases and becomes more comparable.
European citizens have repeatedly shown that they are innovative and recognize future issues. Renewable energy is a good example of this, but at the same time it is also a bad example of the short breath in political support. In the field of mobility, Europe was at the forefront with fossil fuels by optimizing cars, but then missed out on developments, even though many innovations for the engines of the future also took place in Europe (https://de.wikipedia.org/wiki/Transrapid).
The great challenges of our time are also opportunities that must be taken advantage of. The potential is there, now we need the courage and passion to go our own way and to do so as quickly as possible.
Links to similar topics:
Published on August 26, 2020
JSX became a de facto standard for mixing XML markup into JS or TypeScript source files. With a little trick it can be used for quickly creating DOM elements or for templating.
The following snippet can be dropped into a JSX file and will then make an HTMLElement out of the XML markup:
var React = {
createElement: function (tag, attrs, children) {
var e = document.createElement(tag);
// Add attributes
for (var name in attrs) {
if (name && attrs.hasOwnProperty(name)) {
var v = attrs[name];
if (v === true) {
e.setAttribute(name, name);
} else if (v !== false && v != null) {
e.setAttribute(name, v.toString());
}
}
}
// Append children
for (var i = 2; i < arguments.length; i++) {
var child = arguments[i];
e.appendChild(
child.nodeType == null ?
document.createTextNode(child.toString()) :
child);
}
return e;
}
}
The same approach could be used to directly generate a HTML string for templating.
It is then easy to do something like:
document.querySelector('#menu').appendChild(
<li class={active ? 'active' : false}>
{title}
</li>
);
Published on August 11, 2016
I’m still on Objective-C, but I like the idea of Swift having the #keyPath(property name) string expression. Less can go wrong, and autocompletion and refactoring also work.
In good old Objective-C we only have @selector
doing similar things, but nothing that works with key paths. So here is a little trick that I use to fix this problem in my code. First I define this macro:
#define keyPath(k) YES ? @#k : (k ? @"": nil)
Actually the @#k part is already doing the job, but refactoring and autocompletion do not work if the argument isn’t also treated as an expression, therefore I added the little ?: dance.
Now you can use it like this:
[self valueForKeyPath:keyPath(self.sample.greeting)];
It is important also to add the self
in front. If self
is not appropriate in your situation because you are using the key path from another origin, then use this slightly extended version:
#define keyPathFromObject(o, k) YES ? @#k : ((o ?: o.k) ? @"" : nil)
Once in place it will not be missed on refactoring any more:
Source Code
Demo code is available as Gist. I’m not the first one having this idea, here and here and here are examples of prior works on the topic.
The macro was always evaluating the expression. I modified it to never reach the expression part of the code.
The previous implementation only worked for NSString properties; I fixed that by adding another ?: round.
Haha, I have been blind and reinvented the wheel: This implementation is really nice. They even figured out how to prepend the @ to get @keypath(). I’m happy to see that the implementation is similar, although it is full of extra super magic macro power. That said, maybe my implementation is still worth a look due to its compactness ;)
Published on June 20, 2018
In complex projects, the time usually comes when it becomes indispensable for quality assurance to take further measures. I would like to discuss some of these aspects in a loose series:
In this article I’ll take a look at Logging.
The simplest form of a log entry is a message. But it soon becomes clear that this alone will not be enough and a little more context is needed. Usually, the Timestamp and Level are added quickly. This can be implemented in this way, for example:
log.error()
log.warn()
log.info()
log.debug()
In environments like the browser these can be easily filtered, but in a log file it may look different. That is why I like to resort to a trick that I learned from my colleagues when working on a major project, namely to highlight these levels:
E|***
W|**
I|*
D|
Okay, that looks nice and clear, but what makes it better?
It is the possibility to filter, because filtering to |*
will show all messages including and above I|*
, thus also warnings and errors. |**
and |***
filter accordingly on higher levels. This works very well on macOS e. g. in Xcode or the Console App.
A log entry usually starts with a timestamp. This makes sense for a program that runs in production at a customer’s site or on a server. During development, however, this information is rather superfluous and also shortens the visible area of the message. If time is of any interest at all, then it is more about how much time has passed until this or that happens. Therefore, it may make sense to log the time since the start of the program:
I|* 0ms App launch
D| 123ms Open file abc.xyz
E|*** 412ms Could not open file abc.xyz because: Does not exist
Another aspect that helps with the quick perception of information is the color of an entry. The browser does this for us by displaying errors in red. But in some environments, such as Xcode, the use of colors is not possible or difficult to achieve, so emojis can be used as an alternative:
🔷 I|* 0ms App launch
◽️ D| 123ms Open file abc.xyz
❌ E|*** 412ms Could not open file abc.xyz because: Does not exist
The red symbol catches the eye immediately, so the error message is spotted quickly.
Okay, we noticed there was an error, but where did it occur? The most amazing log entry is useless if we can’t locate the cause. For this reason, the file name and the line number should be logged as well. Many IDEs allow you to jump directly to a location through a specific formatting. In Xcode, for example, you press CMD + SHIFT + O and then specify the file and the line number separated by a colon. In the log this could look like this:
🔷 I|* 0ms <main.c:33> App launch
◽️ D| 123ms <AppDelegate.m:54> Open file abc.xyz
❌ E|*** 412ms <AppDelegate.m:62> Could not open file abc.xyz because: Does not exist
But be aware: this is information that should not be exposed in a production environment.
One more piece of information is important in modern programs, namely whether the message comes from an asynchronous code block or the main thread. A visualization with emojis can be helpful here as well, like the rocket in this example:
◽️ 🔷 I|* 0ms <main.c:33> App launch
🚀 ❌ E|*** 412ms <MyCache.m:12> Expected files were missing
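Putting the pieces together, a macro along these lines could produce the format shown above. This is only a rough sketch of the idea, not the actual implementation from my projects; the names HXLOG, LOG_INFO etc. and the hxLogStart variable are made up for illustration:
#import <Foundation/Foundation.h>
#import <string.h>

// Assumption: hxLogStart is set once at app launch, e.g. in main().
static CFAbsoluteTime hxLogStart;

#define HXLOG(emoji, marker, fmt, ...) \
    NSLog(@"%@ " emoji " %s %.0fms <%s:%d> " fmt, \
          [NSThread isMainThread] ? @"◽️" : @"🚀", marker, \
          (CFAbsoluteTimeGetCurrent() - hxLogStart) * 1000.0, \
          strrchr("/" __FILE__, '/') + 1, __LINE__, ##__VA_ARGS__)

#define LOG_ERROR(fmt, ...) HXLOG("❌", "E|***", fmt, ##__VA_ARGS__)
#define LOG_WARN(fmt, ...)  HXLOG("⚠️", "W|**", fmt, ##__VA_ARGS__)
#define LOG_INFO(fmt, ...)  HXLOG("🔷", "I|*", fmt, ##__VA_ARGS__)
#define LOG_DEBUG(fmt, ...) HXLOG("◽️", "D|", fmt, ##__VA_ARGS__)

// Usage:
//   hxLogStart = CFAbsoluteTimeGetCurrent();
//   LOG_INFO("App launch");
//   LOG_ERROR("Could not open file %@ because: %@", fileName, reason);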
Even such an everyday topic as logging still offers room for optimization and can save time, because relevant information can be gathered quickly and the origin of a problem is easier to locate.
Published on January 25, 2018
Along with selling a product comes support. Customers have questions and requests about the software, and these are often repetitive. This is where replies.io comes in, a service provided by my friends Stefan Fürst and Lars Steiger, originally initiated by Ruben Bakker.
Support can be time-consuming and easily eat up a few hours of your work day. But at the same time it is usually not so much work that outsourcing it is worthwhile, especially when you take into account that most answers still need your attention, since a support person might not be able to oversee the specific technical details.
Therefore, it makes sense to cut down the time spent on support by automating and simplifying the tasks. This is what replies.io does:
First of all, users need to be able to get in contact with you. The most obvious way is through email. But what if you could also offer a support form that already tries to cover the most frequently asked questions by analyzing what the user is typing? Replies.io offers these in the form of web forms and a macOS framework. The latter is magic and provides even more benefit, since you get details about the OS and the installed version. The user can also send log files, screenshots and screen recordings with a few clicks.
But even if the user did not find the right answer in the FAQ or the suggestions, you might still have answered a similar question before. While typing your answer you usually get a proposal for an existing answer that fits well. This gets even better over time, with a growing set of answers.
If you are somewhat like me, you will fiddle with the look of the text a bit; at the very least you’ll try to separate the quoted text nicely from your answers, which can be a pain in Apple Mail. With replies.io you click on the last word of the sentence you’d like to answer and a text box shows up, waiting for you to type. Best of all, it will generate a perfectly formatted email for you, with the personalized salutation and the footer already in place.
One essential trick to save time is to slow down the conversation. In a regular email app, the mail is out the moment you hit “send”. Replies.io instead has some smart presets for deferring the sending. This is useful because, if you answer very quickly, the user is likely to keep the conversation going, since it feels like a chat.
Also consider weekends. You might have time to answer a question, but you shouldn’t send it out right away, to avoid an unprofessional impression. You also don’t want to receive more mails from the user on your weekend. The user will still be super happy to get the answer early Monday morning, and you have saved your weekend.
And last but not least, replies.io integrates with HockeyApp. It is super useful for the user to get feedback and understand that you are working on a fix. It is also super useful for the developer to start a conversation about what led to the crash and finally have the user test the fix for that crash.
You can also learn to save time by understanding where you spend it. Replies.io has reports and counters that help you understand the problems users have. The consequence could be that you modify your app and avoid certain questions in the first place. You also learn which features users desire the most and can invite them to become qualified beta testers for these new features before they go public.
Conclusion: replies.io helped me get support under control and save a lot of time by making use of the described features. I wouldn’t want to be without it anymore.
Published on January 16, 2018
Often I find myself in a situation where a lot of repetitive boilerplate code needs to be written, for example when implementing NSCoder for an object, or when the syntax does not appeal to me, like keyed subscripting on a dictionary-like object.
Since I’m still coding in Objective-C, I tried to find an easy solution for these requirements:
NSString and NSNumber, NSArray or NSDictionary properties
Instead of obj[@"name"] I would like to write obj.name
I know there is a lot of prior work doing similar magic, and Core Data also comes with similar features, but hey, sometimes it is fun to just do it yourself.
So I started with a base class SeaObject
derived from NSObject
. This is what the header looks like:
@interface SeaObject : NSObject <NSCopying>
@property (nonatomic, assign) BOOL needsSave;
@property (nonatomic, readonly) NSUInteger count;
@property (nonatomic, readonly) NSEnumerator *keyEnumerator;
@property (nonatomic, readonly) NSArray<NSString *> *allKeys;
@property (nonatomic, copy) NSDictionary *jsonDictionary;
- (instancetype)initWithDictionary:(NSDictionary *)dict;
- (void)configure;
- (id)objectForKey:(id)aKey;
- (void)setObject:(id)anObject forKey:(id<NSCopying>)aKey;
- (void)removeObjectForKey:(id)key;
- (void)setObject:(id)obj forKeyedSubscript:(NSString *)key;
- (id)objectForKeyedSubscript:(NSString *)key;
- (BOOL)writeAsJSON:(id)path;
- (BOOL)readAsJSON:(id)path;
@end
If, for example, I want to build a todo list, I can do it like this:
@interface TodoItem : SeaObject
@property NSString *title;
@property NSNumber *done;
@end
The implementation requires some @dynamic declarations in order to fall through to the fallback mechanisms I’ll describe later:
@implementation TodoItem
@dynamic title, done;
@end
Now I can nicely set properties like this:
item.title = @"Clean kitchen";
item.done = @NO;
On the implementation side of SeaObject all data is stored in an NSMutableDictionary named _properties. The glue code for the keyed subscripting part is trivial.
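Such glue code might look roughly like this. It is just a sketch, assuming _properties is created in init and that every change marks the object as dirty; the actual implementation is in the GitHub repository linked below:
- (id)objectForKey:(id)aKey {
    return [_properties objectForKey:aKey];
}

- (void)setObject:(id)anObject forKey:(id<NSCopying>)aKey {
    self.needsSave = YES; // assumption: flag changes for later saving
    [_properties setObject:anObject forKey:aKey];
}

// These two methods enable the obj[@"name"] subscript syntax.
- (id)objectForKeyedSubscript:(NSString *)key {
    return [self objectForKey:key];
}

- (void)setObject:(id)obj forKeyedSubscript:(NSString *)key {
    [self setObject:obj forKey:key];
}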
The magic is in the code that handles the access to the properties. Since we used @dynamic before, there is no counterpart on the implementation side for those properties. This is why we can override some forwarding fallbacks and, voilà, everything ends up in setObject:forKey: and objectForKey:
- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
NSString *sel = NSStringFromSelector(selector);
if ([sel rangeOfString:@"set"].location == 0) {
return [NSMethodSignature signatureWithObjCTypes:"v@:@"];
} else {
return [NSMethodSignature signatureWithObjCTypes:"@@:"];
}
}
- (void)forwardInvocation:(NSInvocation *)invocation {
NSString *sel = NSStringFromSelector(invocation.selector);
if ([sel rangeOfString:@"set"].location == 0) {
sel = [NSString stringWithFormat:@"%@%@",
[sel substringWithRange:NSMakeRange(3, 1)].lowercaseString,
[sel substringWithRange:NSMakeRange(4, sel.length-5)]];
id __unsafe_unretained obj;
[invocation getArgument:&obj atIndex:2];
[self setObject:obj forKey:sel];
} else {
id obj = [_properties objectForKey:sel];
[invocation setReturnValue:&obj];
}
}
Add some more magic for reading and writing dictionaries, and this nice little helper does its job. Adding categories also works nicely.
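Just to illustrate the idea, the JSON reading and writing could be as simple as the following sketch, assuming _properties only contains JSON-safe values; the real code in the repository handles more cases:
- (BOOL)writeAsJSON:(id)path {
    NSData *data = [NSJSONSerialization dataWithJSONObject:_properties
                                                   options:NSJSONWritingPrettyPrinted
                                                     error:nil];
    return data != nil && [data writeToFile:path atomically:YES];
}

- (BOOL)readAsJSON:(id)path {
    NSData *data = [NSData dataWithContentsOfFile:path];
    if (!data) return NO;
    NSDictionary *dict = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
    if (![dict isKindOfClass:[NSDictionary class]]) return NO;
    [_properties setDictionary:dict];
    return YES;
}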
Source Code
You get the full source code at GitHub.
Please leave your comments below. I’m looking forward to your feedback.
Published on May 8, 2018
I am currently working on a new storage layer for my Objective-C apps, and I wondered if there is a significant difference in speed and size between the various serialization methods popular on macOS and iOS.
With Foundation we already get some great ones on board:
NSJSONSerialization
NSKeyedArchiver
NSPropertyListSerialization in the two flavors XML and Binary
As the newcomer I chose:
MPMessagePackWriter implementing MessagePack
I know there are others like BSON, Thrift or Avro, but I want to keep things flexible on my side and not define any schema beforehand. I also have in mind to use the format cross-platform, which should not be a problem with JSON and MessagePack. I left the other ones in the test out of curiosity, but they are not the winners anyway, as we’ll see later ;)
I wrote a little unit test to perform my admittedly non-representative tests, with one part for performance and one for size comparison. You can take a look at this gist to see what I did.
I also chose a very small test object and a very large one. As I said, this is really not very representative, but it might give an idea.
Finally, I also added GZIP into the mix, just to see if I was over optimizing for my problem.
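To give an idea of what such a comparison can look like, here is a minimal sketch. It is not the actual gist; the tiny sampleObject stands in for the real small and large test data:
#import <XCTest/XCTest.h>

@interface SerializationTests : XCTestCase
@end

@implementation SerializationTests

- (void)testSizeComparison {
    NSDictionary *sampleObject = @{@"title": @"Hello", @"count": @42};
    NSData *json = [NSJSONSerialization dataWithJSONObject:sampleObject options:0 error:nil];
    NSData *archive = [NSKeyedArchiver archivedDataWithRootObject:sampleObject];
    NSData *plist = [NSPropertyListSerialization dataWithPropertyList:sampleObject
                                                                format:NSPropertyListBinaryFormat_v1_0
                                                               options:0
                                                                 error:nil];
    NSLog(@"JSON %lu bytes, KeyedArchiver %lu bytes, binary Plist %lu bytes",
          (unsigned long)json.length, (unsigned long)archive.length, (unsigned long)plist.length);
}

- (void)testJSONWritePerformance {
    NSDictionary *sampleObject = @{@"title": @"Hello", @"count": @42};
    [self measureBlock:^{
        for (int i = 0; i < 1000; i++) {
            (void)[NSJSONSerialization dataWithJSONObject:sampleObject options:0 error:nil];
        }
    }];
}

@end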
For the small test MessagePack is the winner. JSON is also doing very well. GZIP does not play a big role at this small size; it even makes things worse, but that was to be expected. While XML was expected to be large, I was surprised that KeyedArchiver is too.
For the large sizes GZIP really makes a difference. Of course there are a lot of repetitions for the property names which should make a nice target for compression.
But then again MessagePack is the winner and needs almost half as much space as the loser in this race. The distance to JSON, again, is not that big.
A very strange observation is that the binary variant of the Plist is even worse than the XML one.
Green stands for tests on the small data and blue for the large one:
For smaller data JSON seems to be the fastest, followed by MessagePack. For larger data Plist is faster. KeyedArchiver is the slowest in the field.
Overall, for my personal purposes, JSON and MessagePack seem to be the most appropriate ones. I was very positively surprised by the JSON results. MessagePack, as the clear winner in the size comparison, is probably the best choice for the projects I’m working on.
I was quite disappointed by KeyedArchiver, which I had expected to be among the top performers. Unless required for Apple-OS-specific purposes, it really does not make sense to use any of those proprietary formats anymore.
Published on February 3, 2018
For my current project zutun.io I’m implementing the reordering of entries as described in the Apple documentation.
The goal is to perform as few operations on the database as possible; therefore I use a floating point number as the sort property. The basic idea is simple:
To put an entry between ordering numbers A and B choose a number that is greater than A and smaller than B.
The trick is to choose a random number to avoid conflicts when synching the data later on.
First of all, let’s create a little helper that creates random floating point numbers in a range:
#import <Foundation/Foundation.h>
#import <stdlib.h>
#import <stdint.h>
static inline double hxRandomDouble(double min, double max) {
double _min = MIN(min, max);
double _max = MAX(min, max);
double _rnd = ((double)arc4random() / (double)(UINT32_MAX-1));
return _min + ((_max - _min) * _rnd);
}
Let’s say our entries are defined like this:
@interface TodoRecord : NSObject
@property NSString *title;
@property NSNumber *order;
@end
In our view controller we’ll store the entries in the property objects
. Adding a new entry is then as easy as:
TodoRecord *rec = [[TodoRecord alloc] init];
rec.title = title;
NSNumber *max = [self.objects valueForKeyPath:@"@max.order"] ?: @1;
rec.order = @(max.doubleValue + hxRandomDouble(1., 2.));
This will create a new entry with a distance of at least 1
to the previous one.
Now comes the tricky part. We will first allow reordering in general, which only makes sense if we have at least 2 entries. I consider the rest of the editing code trivial, like calling setEditing: etc.:
- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath {
return self.objects.count > 1;
}
The following code is the heart of the implementation that applies the movement. It is basically the described idea put into practice. It requires just one database operation and syncs nicely.
- (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath {
NSInteger src = sourceIndexPath.row;
NSInteger dst = destinationIndexPath.row;
// Nothing to do
if (src == dst) return;
// Move the entry in the representing NSMutableArray, see Apple Docs
TodoRecord *rec = (id)[self.objects objectAtIndex:src];
[self.objects removeObjectAtIndex:src];
[self.objects insertObject:rec atIndex:dst];
// Find the new neighbours of our entry
NSInteger len = self.objects.count;
TodoRecord *beforeRec = dst - 1 >= 0 ? (id)self.objects[dst - 1] : nil;
TodoRecord *afterRec = dst + 1 < len ? (id)self.objects[dst + 1] : nil;
// Find the range for the new ordering number
double before = beforeRec.order.doubleValue;
double after = afterRec.order.doubleValue;
double current = rec.order.doubleValue;
// At the ends add some margin
if (!beforeRec) before = after + 2.5;
if (!afterRec) after = before - 2.5;
// If not already inside of the range...
if (before < current || after > current) {
// ... put somewhere middleish
double dist = (before - after) / 3.;
rec.order = @(hxRandomDouble(after + dist,
before - dist));
}
}
There may be theoretical limits for this implementation, but the complications are minimal compared to the benefit of the algorithm.
Would you like to comment on it? Send me a message at @holtwick
I ran some tests to see when the first collisions occur. For the example above this is usually around 50 steps. That means you can move items about 50 times between a specific pair of entries before the order numbers converge to identical values.
I then tried the same with int32_t and it turned out it reached about 30 steps without collision, which makes sense for a 32-bit number halved in each step ;)
So sadly, in reality infinity isn’t too far away. But I still think these edge cases will not often be reached, and when they are, there is still enough time to apply the classic approaches again and gain room for another 30-50 rounds.
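For illustration, such a classic fallback could simply renumber all entries with fresh spacing. This sketch is not from the original post and assumes self.objects is in display order, i.e. descending order values as in the move code above:
- (void)renumberAllObjects {
    // Note: unlike a single move, this touches every record in the database.
    double order = self.objects.count * 3.0;
    for (TodoRecord *rec in self.objects) {
        rec.order = @(order);
        order -= hxRandomDouble(1., 2.);
    }
}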
Published on February 27, 2018
This blog and website consist of static pages created using a practical technique that I would like to introduce in this article.
Update 2018-02-15
The project described here can be downloaded from GitHub now.
The special thing about this is that a large part of this website generator consists of programming patterns, which are also used in dynamic websites via jQuery. A simple example says more than a thousand words:
const site = SeaSite(
'public', // Source folder
'dist' // Destination folder
)
site.handle('index.html', ($) => {
$('title').text('New Title')
})
This example creates the site object with the source directory public and the target directory dist. In the first step, the content of the source directory is cloned into the target directory. The next step is to edit the file index.html. The handler function gets the variable $ known from jQuery and sets the content of the title element to New Title. The modified content is saved automatically by the framework.
From here it is easy to build more complex websites with a few lines of code:
site.handle(/.*\.md/, (content, path) => {
const $ = site.readDOM('template.html')
const htmlPath = path.replace(/\.md$/, '.html')
const md = parseMarkdown(content)
const title = md.props.title
$('title').text(`${title} - My Website`)
$('#title').text(title)
$('#content').html(md.html)
site.write(htmlPath, $.html())
})
template.html:
<!DOCTYPE html>
<head>
<title>Template</title>
</head>
<body>
<h1 id="title">Title</h1>
<div id="content">Content</div>
</body>
hello-world.md:
---
title: Hello World
---
Lorem **ipsum**
This example uses a file pattern to find all Markdown files in the site’s source folder. As you may notice, this time we get a plain string instead of the DOM object from the previous example. This is because DOM objects are only generated from html
and xml
files, otherwise a string is returned.
We then directly create a new DOM object from the template.html file. There we set the content of the title element as well as the content of the DOM element with the ID #title. The title is extracted from the Markdown file, where we could put even more properties, such as language, description or keywords.
The Markdown parser “marked” that we use converts the contents to an HTML string we can pass to the #content element in our template.
The last step is to write the file with a .html
suffix. We don’t need the Markdown files anymore and could clean up by calling site.remove(/.*\.md/)
.
This little script will be applied to all Markdown files in the site’s source folder, so you can quickly build up a site with easy to create content. The CSS selectors are super powerful and changing other aspects of the page is super simple and intuitive.
But it doesn’t stop here, let’s push it a bit further! Let’s use JSX to generate portions of HTML that need to be even more flexible. Let’s imagine we want to create an index of all Markdown files we converted in the previous example:
const pages = []
site.handle(/.*\.md/, (content, path) => {
// ...
pages.push({ htmlPath, title })
})
site.handle('index.html', ($) => {
$('#content').html(
<ul>
{pages.map(page => <li><a href={page.htmlPath}>{page.title}</a></li>)}
</ul>
)
})
We enhanced the previous example by collecting page info in the pages variable. After all Markdown pages are processed, the links are added to the index.html file. We use JSX to create a simple list with links. This is the same code you would use in a React project, but of course this is a custom JSX generator, which creates an HTML string from the JSX code.
This way you don’t need any complex templating language in the HTML file itself to get things done.
All this is made possible by the awesome cheerio project, which drives the DOM and jQuery-like part. The API covers everything you’ll need to manipulate the HTML and XML files.
Published on December 30, 2017
I love Typora for editing Markdown. It really unites everything I ever expected from an editor of that type. Of course, I’m writing this article with Typora as well and processing it with my static website builder SeaSite.
Although it already comes with great themes for different editor requirements, I’d still love to get a preview of the content that is as close as possible to what the later published text will look like.
And as to be expected, Typora lets you customize the look and some of the feel by simply dropping in a CSS file. That is what I did: I went to the theme folder via the app’s preferences and added a symbolic link to my custom CSS:
ln -s <path_to_my_css> holtwick.css
After a restart it will appear in the menu like this:
Perfect! Now to reload the style after any update to the CSS I just select the theme again from the menu and it will reload.
Of course, I wanted to reuse as much of my existing CSS as possible. To achieve that, I’ve split the CSS up into several LESS files. In text.less I put the basic text styles like h1, blockquote etc.
I then use two more LESS files, one for the website and the other to be linked with Typora:
// Website
.blog {
@import "text";
}
// Typora
#write {
@import "text";
}
As you might notice, the Typora stuff needs to be wrapped into #write
to not unintentionally affect other areas of the window like the navigation.
You can then add some fine-tuning to it, like adding styles for .md-meta-block, which is the YAML property area.
And here is the final design. Attention, it might be a déjà vu ;)
I hope you enjoy Typora as much as I do and I send a big compliment to its author from my side.
Published on March 21, 2018
After a few years, it was time to update the website. For the last version, I had even developed my own static website generator called Hostic and everything went wonderfully. But even though I love reinventing the wheel, I only did it halfway for this new attempt.
Basically, this website consists of Vue and Markdown. A static version is created so that a corresponding page with content is available for each URL. This allows the website to be easily indexed by search engines, and the loading time is very short. The process is called SSG (Static Site Generation).
The highlight, however, is that this approach allows dynamic elements to be integrated into the website and even into individual posts. A first example is the interactive sign-up for my e-mail newsletter; I simply insert <AppNewsletter/> at this point:
Another thing that makes things easier is the use of Obsidian as a Markdown editor for the content. I use this practical tool every day anyway and can thus easily maintain the content.
Obsidian Callouts
Obsidian’s callouts are also supported. They are also known as GitHub alerts. This is made possible by the practical Markdown plugin.
I derived my own approach from the great project Vitesse. The technique used is almost identical, but I had to make a few adjustments for my purposes.
Published on December 6, 2023