Archives 2005 - 2019

If you lose all sound cards on Ubuntu, OSS configs may be the problem

published Dec 07, 2018 01:20   by admin ( last modified Dec 07, 2018 02:37 )
sudo apt remove --purge oss4-base

…may get your sound cards back on Ubuntu 18.04 and 18.10, if you've been dabbling with OSS.

So I tried to get my laptop's microphone to work, which seems to be a problem that needs to be fixed upstream, but I decided to give it a go anyway. As part of this I installed some OSS modules and an interface to PulseAudio; somehow "oss4" was involved. Anyway, that did not work. I uninstalled it, and tried a lot of other things to get my sound input to work.

And then I had no sound cards anymore. The hardware was there but no kernel modules loaded. I spent hours on this until I found in the Debian wiki that old OSS drivers or configs may be lying around and preventing things from loading. And indeed, looking into /etc/modprobe.d there were a number of conf files for OSS, left behind after it had been uninstalled. They still claimed the hardware, which means such a file is not really just a config file; it's code interpreted by another subsystem.

sudo apt remove --purge oss4-base

…took care of that. I did not find that in any Ubuntu documentation.

If you don't unload all OSS modules then ALSA modules will not be able to initialise (or work properly) because the OSS driver will be futzing with the sound hardware that the ALSA driver needs to control. If you see a message about "sound card not detected" and you are sure you have the right ALSA driver, the presence of an OSS module could be the reason.

ALSA - Debian Wiki


In the browser, stopPropagation() and stopImmediatePropagation() are not the same

published Dec 06, 2018 12:45   by admin ( last modified Dec 06, 2018 12:45 )

In fact stopImmediatePropagation() stops more: it also blocks any remaining event handlers on the element you call it on, not just handlers on parent elements.

In the MDN documentation for preventDefault(), you can read this:

The event continues to propagate as usual, unless one of its event listeners calls stopPropagation() or stopImmediatePropagation(), either of which terminates propagation at once.

This can give you the impression that they are equivalent, which they are not. I just got out of a sticky bind by using stopImmediatePropagation() instead of stopPropagation(). I'm still not sure why, but I suspect it has to do with promise chains that were invoked on the line after the preventDefault() handler.
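To see the difference in isolation, here is a small sketch runnable in Node (version 15 or later, which ships the WHATWG EventTarget), no browser needed. With a single target there is nowhere to propagate to, so stopPropagation() has nothing left to stop, while stopImmediatePropagation() also cancels the remaining listeners on that very same target:

```javascript
// Two listeners on the same target. stopPropagation() in the first
// one does NOT prevent the second from running; it only stops the
// event from travelling on to ancestor elements (none here).
const fired = []

const a = new EventTarget()
a.addEventListener('ping', e => { fired.push('a1'); e.stopPropagation() })
a.addEventListener('ping', () => fired.push('a2'))
a.dispatchEvent(new Event('ping'))

// stopImmediatePropagation() additionally cancels the remaining
// listeners registered on the same target.
const b = new EventTarget()
b.addEventListener('ping', e => { fired.push('b1'); e.stopImmediatePropagation() })
b.addEventListener('ping', () => fired.push('b2'))
b.dispatchEvent(new Event('ping'))

console.log(fired) // a2 runs, b2 does not
```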


How to export a PDF or Google slide presentation to one image per slide

published Nov 13, 2018 06:10   by admin ( last modified Nov 13, 2018 06:38 )

I tried this and it worked (Linux).

Make sure gs (ghostscript) is installed, export from Google slides to PDF, and then do:

gs -sDEVICE=pngalpha -r1200 -o file-%03d.png in.pdf

Source: png - How to download Google Slides as images? - Stack Overflow

This is tested and verified by me. "-r1200" forces very high resolution; you can leave it out for roughly 100x faster rendering.

There also seems to be a way to script image export directly from inside Google Slides, see the discussion at the above link.

 


Getting ntpd to work on a toy OpenBSD 6.3

published Oct 10, 2018 01:10   by admin ( last modified Oct 10, 2018 01:10 )

You need to set an interface for it to listen on in /etc/ntpd.conf, and you need to comment out the constraint on Google, because as far as I can tell OpenBSD does not trust one of the certificates there. For more serious use you should of course tackle the root cause of the certificate problem instead.
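For reference, a minimal /etc/ntpd.conf along those lines might look like this (the wildcard interface and the server pool are just example values; check ntpd.conf(5) on your version for the exact syntax):

```
# Listen on all interfaces (or name a specific address/interface)
listen on *

servers pool.ntp.org

# Commented out: the HTTPS constraint that fails on 6.3/-current
# constraints from "https://www.google.com/"
```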

openbsd dev - bugs - ntpd with default config broken in -current



Distribution independent application deployment systems for Linux

published Oct 09, 2018 10:25   by admin ( last modified Oct 11, 2018 11:13 )

There is:

  • AppImage,
  • Snap and
  • Flatpak.

AppImage seems to be a bit on the wane, but it has a nice one-file, put-anywhere format and can run without any special components on the system. Snap is Ubuntu-dominated and has sandboxing features built in. Flatpak (Red Hat et al.) also has sandboxing features, which are also available separately as Bubblewrap.

Here are some thoughts: official Firefox is available as a Snap from Snap's main repository. For Flatpak there is a third-party source, albeit a very reputable one. I like Snap's solution of having an official repo between the developers and the machine, so that the application hopefully gets one more code review. Plus it's an official Firefox build in this case too.

Firefox is now available as a Snap package - OMG! Ubuntu!

Unofficial Firefox flatpak repository

An interesting question is whether AppImage could work on OpenBSD, but no one seems to have tested it, or maybe it simply does not work: AppImage for BSD · Issue #98 · AppImage/AppImageKit

I actually tried it now on OpenBSD, with the standard Subsurface AppImage example app for Linux, and you get "Exec format error" with both the 32-bit and 64-bit versions. You need to install bash to get that far; I also installed FUSE, but I think it croaked before that came into play.

Ah: OpenBSD 6.0 tightens security by losing Linux compatibility | InfoWorld

Apparently there are big differences between OpenBSD and Linux:

  • Operating system syscall numbers
  • Method for calling syscalls
  • Parameter passing: OpenBSD passes all parameters on the stack, while Linux does not

Linux chroot under OpenBSD /bin/bash: Exec format error

Deduplication and versioning file systems on Linux

published Oct 06, 2018 01:05   by admin ( last modified Oct 06, 2018 01:07 )

Summary: of the file systems included in the Linux kernel, NILFS is the only one regarded as stable. ZFS now has a long history on Linux too (outside of the kernel), Red Hat is developing Stratis, and bcachefs is also under development. Red Hat is dropping support for Btrfs. The author of bcachefs claims Btrfs has low code quality. LVM can do deduplication if configured into thin pools.

Deduplicating file systems can create snapshots, which on such a file system can be just folders. Two file systems that can do snapshots and are regarded as stable are:

  • ZFS— Has license problems on Linux, but it is possible to use. Needs 8GB of RAM or thereabouts for efficient deduplication.
  • NILFS—Regarded as stable on Linux, according to Wikipedia: "In this list, this one is [the] only stable and included in mainline kernel." According to the NILFS FAQ it does not support SELinux that well: "At present, NILFS does not fully support SELinux since extended attributes are not implemented yet".

Another way of doing snapshots seems to be on a slightly higher level:

Red Hat has decided not to support Btrfs in the future and is working on something called Stratis, which is similar to LVM thin pools, built on top of the XFS file system.

We may also get an alternative to btrfs on Linux with bcacheFS.


Cloud backup & sync for Linux, Android: Comparison table & a winner

published Sep 25, 2018 01:20   by admin ( last modified Jun 09, 2019 01:37 )

Updated 2019-06-09

Summary

Rsync + EncFS for the simplest backups, Restic as an all-in-one solution, and Syncthing for synchronization. At the end of the article there is a comparison and analysis of Restic vs Borg vs Duplicati.

The comparison table below lists some info on:
DropBox, siacoin, tahoe-lafs, cozy, Nextcloud, ownCloud, Seafile, perkeep (camlistore), SparkleShare, Syncthing, Rclone, Duplicati, Restic, Borg, Bup, Back In Time, IDrive, Amazon Drive, Backblaze B2, Jottacloud, Rsync.net, Hetzner SB, upspin, SAFE Network, Mega, OneDrive, storj, Google Drive, Tarsnap, and Rsync + Rsync.net/Hetzner (with and without EncFS/gocryptfs).

Simplest candidate: Rsync + EncFS + Hetzner/Rsync.net

The rather boring answer, and the simplest winner, is Rsync + EncFS for personal backup, with the cloud services Rsync.net or a Hetzner storage box as snapshotting back ends, or your own server of course. EncFS and Rsync have been around for a long time. I have used rsync + Btrfs previously for snapshotting backups, so it is not a completely new solution for me. The new thing is using the cloud.

The only reason Rsync is even in the running is that there are at least two specialized storage services that take rsync backups and snapshot them automatically on the server side. This server-side snapshotting means that the client cannot delete old backups, a risk the other cloud backup solutions run.

EncFS has some problems with metadata leaking in normal mode, but it has key stretching (meaning your password is not used directly; a harder-to-crack key is derived from it).

Magic folder (synchronization) winner

The winner here is Syncthing. I have been using it now (2019-06-09) for a month on non-critical data, and it works fine. The runner-up, or actually the only other candidate, is SparkleShare. Unfortunately SparkleShare is not well documented, and when using it through the GUI it runs into error conditions that it doesn't tell you about.

All-in-one encrypted remote backup winner

In the category "All-in-one encrypted remote backup winner" which comprises Restic, Duplicati and Borg, the winner is Restic. It has been somewhat cryptographically reviewed, and it is in the standard Ubuntu repos. It also has better key stretching than the competition.

Rationale for cloud backup

I've had some really nice personal backup services in place: rsync, rdiff-backup, Camlistore (now called Perkeep), and time-machine-like contraptions on Btrfs (the jury is still out for me on whether Btrfs is good, though; I've got to check what happened given the error messages. Update: it looks like a hardware fault in the ext4 system SSD, so no fault of Btrfs; it still runs!). The problem is that they have all relied on my own servers, and as life goes by these servers tend to break or be repurposed.

Lists

Comparison table

A preliminary list of software and services for backup, file synchronization and cloud storage components that work on and with Linux.

  • "Linux/Android"—If it is available for Linux and Android. "Standard" for Linux means it is already in the normal Ubuntu repositories.
  • "Their storage"—Means that you do not need to configure your own storage server. They, or a third party, supply server storage for free or for a fee. Ideally combined with "Least authority". If free, capacity is listed.
  • "Magic folder"—Files are transparently updated between devices.
  • "Multi sync"—Sync more than two devices/machines to the same backup or set of files.
  • "Libre"—Whether the source code is libre. I haven't always checked client libraries.
  • "LA—Least authority"—Means that you're in control of the encryption on the client side, and servers can't break it (hopefully). Comes in handy if servers get hacked. Some refer to this as "end-to-end encryption", though that is slightly different in definition. "Zero knowledge" is also used as a term.
  • "Post check"—Means you can verify backups by sampling them.
  • "Go back"—That you can go back in time, for example with snapshots or versions.

There are most likely mistakes below. Some non-Linux services are included so that you do not need to wonder whether I missed them. "Yes", and anything that is not a "No" in a cell, is good (from my perspective).

Service | Linux/Android | Magic folder/schedule | Multi sync | Their storage | $ Price 1TB/yr | LA/Key Stretch | Post Check | Integration Frontend/Back | Libre | Redundant | Go back | Conflict res. | Lang/last comm. | Extra Features
DropBox Yes/Yes Yes Yes 2 GB $120 No   Magic folder No No No Branch    
siacoin Yes/No ?   For pay   Yes/?   Nextcloud, FS Yes Yes       Crypto coin

tahoe-lafs Yes/No Yes Optional Yes/? Magic folder, web, sftp Yes m of n Yes Shaky Python m of n redundant
cozy Yes/Yes No   5GB $120 No     Server No No      
nextcloud Yes/Yes Yes Yes 10GB $420 Beta   WebDAV/ Yes     Branch PHP Sprycloud price
owncloud Standard/Yes Yes   Optional       WebDAV/ Yes     Branch PHP  
seafile Standard/Wobbly Yes   No   Yes/?   Magic folder/Stand-alone Yes     Branch C Android app keeps crashing
perkeep (camlistore) Yes/Upload Optional Stand-alone Yes Replicas Yes Branch Golang Content addressable
Sparkleshare Standard/No Yes   No   Yes/?   Magic folder Yes No Yes     "Not for backups"

Syncthing Standard/Yes Yes No No Browser Yes Yes Branch Golang, days, 3 "Not for backups"
Rclone Standard/No No No Optional       34 backends Yes No backend   No    
Duplicati Yes/No No No Optional Yes/sha256 26 backends, incl. Sia, Tahoe Yes No backend Yes C# minutes, 5
Restic Standard/No No/No Optional Yes/Scrypt e.g. n18,r1,p3 Yes B2, GDrive, S3, etc. Yes No Yes Golang, days verifiable backups
Borg Standard/No No/No No     Yes/PBKDF2 Yes Stand-alone Yes No     Python, days Mountable backups
Bup Standard/No No                        
Back in time                            
IDrive No/Yes                          
Amazon Drive Yes/Yes     5GB $60 No                
Backblaze B2         $100                 Only pay what you use
Jottacloud Yes/Yes No   5GB $84                  
Rsync.net Yes/No       $480                 ZFS backend
Hetzner SB         $96     Borg, rsync, WebDAV            
upspin Yes/No                         Early stages proj.
safe network - -   -   -   - - - - -   Beta crypto coin
Mega Sync/Yes                          

OneDrive No
storj (N/A) - -   For pay   Yes   - Yes Yes       Alpha Crypto coin
Google Drive Yes/Yes No   15GB $100 No   Browser No No No     Editors
Tarsnap Yes/No No   For pay $3000+ Yes         Yes     Deduplicates
Rsync+Rsync.net/Hetzner Standard/Yes No/No Yes No $480/$96 No No No Yes No Yes None
Rsync+Rsync.net/Hetzner+EncFS/gocryptfs Standard/Yes No/No Yes No $480/$96 Yes/PBKDF2 (Scrypt for GocryptFS) No No Yes No Yes None

Other offerings (or lack of such)

  • For syncany, the team has gone missing… Maybe they have been bought to work on some well-funded solution?
  • Filecoin has been funded to the tune of $250 million dollars. I hope to see something produced from them soon!

What I'm looking for

I would like to have full redundancy, all the way from the device. I had this before with two independent systems: a Synology DiskStation and rsync, fully independent all the way from the data. I did try obnam at one time, but it did not work reliably for me.

Magical folder

It's probably not a good idea to have two different programs share or nest magical folders. I guess the update algorithms could start fighting. It therefore seems like a better idea to use one magical folder service, such as dropbox, and then apply one or several backup services on that magical folder using a completely different backup system. Or even different systems.

Versioning

Your data could be accidentally overwritten by user processes. In that case you want to be able to go back.

Quick restore

You want to be able to be up and running quickly again, both on user devices and get a new backup server up and running again.

Redundancy in backups

This means using different systems already at the client, and also monitoring what is going on.

Somebody else's storage

I'd like to try to use remote storage services. One way of doing that more securely is to have things encrypted client side, something called "Zero knowledge" on e.g. Wikipedia's comparison page. I prefer the term "Least authority" which is the "LA" in Tahoe-LAFS.

Least authority

One way of establishing this separately is to use EncFS and back up the encrypted version. An interesting approach is to keep the encrypted folder with read/write rights, so it can be used by a backup client with low privileges. A downside with EncFS is that you more than double your storage need on the client computer, unless you use the reverse mount option, which actually is pretty handy.

One guy has tested how well deltas work with EncFS, and the further back in the file the change is, the better it works. A project called gocryptfs seeks to perform faster than EncFS's paranoia mode.

Some quotes from Tahoe-LAFS which are a bit worrying

It seems to be a really solid system, but as with all complex systems, the behaviour is not always what you'd like. Some quotes from their pages:

"Just always remember that once you create a directory, you need to save the directory's URI, or you won't be able to find it again."

"This means that every so often, users need to renew their leases, or risk having their data deleted." — If you do not periodically renew, things may disappear. If you perish, so does your data. Maybe you can set the lease to infinity?

"If there is more than one simultaneous attempt to change a mutable file or directory […]. This might, in rare cases, cause the file or directory contents to be accidentally deleted."

Deduplication and versioning file systems

It seems like a good idea to use a deduplicating file system to create snapshots, which on such a file system can be just folders. Two file systems that can do snapshots and are regarded as stable are:

  • ZFS— Has license problems on Linux, but it is possible to use. Needs 8GB of RAM or thereabouts for efficient deduplication.
  • NILFS—Regarded as stable on Linux, according to Wikipedia: "In this list, this one is [the] only stable and included in mainline kernel." According to the NILFS FAQ it does not support SELinux that well: "At present, NILFS does not fully support SELinux since extended attributes are not implemented yet".

Another way of doing snapshots seems to be on a slightly higher level:

Red Hat has decided not to support Btrfs in the future and is working on something called Stratis, which is similar to LVM thin pools, built on top of the XFS file system.

We may also get an alternative to btrfs on Linux with bcacheFS.

Cloud backups — Rsync, Borg, Restic or Duplicati?

For cloud backup purposes it has narrowed down to four choices of which I may deploy more than one:

Rsync

rsync — Rsync can work with the Rsync.net service. Overall a simple and time-trusted setup. They use ZFS on their side and they do snapshots, and you can decide when those snapshots happen. It can be a bit expensive though. The setup with Rsync.net would be very similar to the setup I already have for local backups, with rsync to Btrfs snapshots, although push instead of pull. It should also work fine with my phone, with Termux or Syncopoli. No scheduling is built in. A Hetzner storage box is a cheaper alternative that does the same as Rsync.net, although probably less reliably, which they are open about.

+ simple, tried and trusted. Available by default on all Linux distributions.

+ with Rsync.net, the client cannot overwrite old backups. This is a truly big point!

+ There are any number of rsync clients for Android, such as Syncopoli.

- no scheduling

- you need to learn rsync syntax (I already know it though)

- No encryption. Although that may be a benefit if you combine it with a good encryption layer. The question is, which one? There is EncFS and a newer competitor, gocryptfs.

Borg

I had given up on Borg, since it needs a Borg back end, until I found Hetzner storage boxes. These work out of the box (pun intended) with Borg. However, do I want to learn yet another configuration language?

Restic

Restic — Restic seems to get the nod from some very intelligent programmers; check for example this review by Filippo Valsorda. However, it has no scheduling or process management of backups. That is kind of important, also with respect to recovering from errors. But maybe the other alternatives have not put much work into that anyway?

The parameters for scrypt in restic are something like "N":262144, "r":1, "p":3. This is on the low side, consuming only about 32 MB of RAM, I believe. Restic is set up to read whatever values these parameters have, so if you feel adventurous you can change the key files in the repo to higher values; make sure you know what the consequences are, of course.

+ In Ubuntu repo

+ Liked by some smart people

- No scheduling

- Need to learn the language

Duplicati

duplicati — This also comes recommended; however, it is the only one of these four that is not in the Ubuntu repositories, and it has slightly less glowing reviews than restic. Currently one version, 2.0, is in beta, and the old version, 1.3.4, is no longer supported. That is in itself weird.

+ Great user interface

+ Includes scheduling. The only one that does so of the shortlisted candidates

- Not in Ubuntu repos

- Keystretching is there but not as well-implemented. See next section for more info.

How good is the client-side encryption & key stretching in EncFS, GocryptFS, Borg, Duplicati and Restic?

There are at least three components here:

1) The encryption used. They all use AES but there might be subtle differences.

2) Overall design and leaking of metadata

3) Keystretching. Passwords I believe can often be the weakest link, and some good keystretching could mitigate that.

Encryption

They all use AES, although Borg is considering ChaCha20; I am not sure whether they have implemented it.

Keystretching

Of the techniques used by these components, the best one is scrypt, as long as it uses enough memory, followed by PBKDF2, and in last place applying SHA-256 over and over again.

Scrypt is used by Restic and GocryptFS.

Duplicati uses 8192 rounds of SHA-256 for key stretching if you use the AES option. A SHA-256 miner could make short work of that, I guess, evaluating password candidates in parallel. Not sure why they chose 8192 rounds of SHA-256; it seems like a subpar choice. There is also an OpenPGP encryption option, and in GPG's libraries there is a GCRY_KDF_SCRYPT option, though I am not sure how much it is used: https://www.gnupg.org/documentation/manuals/gcrypt/Key-Derivation.html. I can see no mention on the web of using scrypt for generating keys in GPG, so I'm not sure it can even be used in practice.

Borg uses PBKDF2.

EncFS uses PBKDF2 for whatever number of iterations takes 0.5 seconds, or 3 seconds in paranoia mode.

Overall design and leaking of metadata

Taylor Hornby has made audits of both EncFS and GocryptFS, the latter here: https://defuse.ca/audits/gocryptfs.htm

EncFS has some problems with leaking metadata that are widely known. But leaking metadata about files may not be all that bad for my use case?

Restic has actually been reviewed (sort of) by a cryptography specialist (Filippo Valsorda) and he gave a thumbs up, if not an all clear. It also has keystretching which I see as a requirement more or less! It uses scrypt for keystretching which I think is a good choice as long as you're not inside the parameters of a scrypt miner. It encrypts with AES-256-CTR-Poly1305-AES

And the winner is

Rsync

Rsync only wins because at least two storage providers have provided snapshots for it.

Addendum 2019 - file synchronization

Suddenly in 2019 I now have the need for synchronization between laptops. Here are some initial notes from the research I do now:

Sparkleshare - seems to use a bog-standard git repository as back end, great! This ought also to mean that there is a git repository on the front-end side, so recovery from a botched central repository ought to be easy. The client is in the Ubuntu repositories. Encryption does exist, but merges will then always fail, which is understandable. The encryption uses a symmetric key that cannot be changed later. The obvious step here would be to encrypt that key with an asymmetric key, which it seems they haven't thought of, which in itself may indicate a not completely thought-through process. After installing on Ubuntu, there is no man page and basically no command-line help. One thing to remember is that the ".git" suffix needs to be entered when connecting to GitHub and Bitbucket. It does not give any warning, just churns while doing nothing, if you enter an incorrect repository URL.

On Ubuntu 19.04 you get a generic binary install; I'm not sure if it's Flatpak or Snap. This means that the config files are in ~/.config/org.sparkleshare.SparkleShare and not under e.g. ~/.config/SparkleShare. SparkleShare seems to identify a computer by the SSH public key used. It may be that this precludes using the same key for more than one computer.

Overall, the documentation for SparkleShare is lacking, and when it is trying to connect to a repo it gives absolutely no information on what it is doing. In fact, clear unrecoverable errors that you would see when running from the command line produce no communication at all through the GUI.

Syncthing - seems to be truly decentralized, relying on a cluster of discovery servers. However, those servers seem to be shared with others. Is that desirable? The client and the discovery server are in the Ubuntu repositories. Encryption does not seem to be supported; however, if files are never stored outside of your own machines, this may actually be moot. It seems relay servers, which are needed if at least one of your machines is firewalled, must be public. Or actually it is not clear; I read different things on GitHub. I guess SSH tunneling to a relay server from all involved parties could make it possible to run privately. Maybe even using a small VPS somewhere for that job.

Seafile - custom-made server, but comes with high recommendations on the selfhosted subreddit. The client is in the Ubuntu repositories but not the server. The encryption uses a symmetric key that most likely cannot be changed later. That key is in its turn encrypted with a user password that is key stretched (with PBKDF2 and not scrypt, but you cannot get everything). On the whole the encryption workflow indicates a thought-through process, as compared to e.g. SparkleShare.

Nextcloud - custom made server but comes with high recommendations on the selfhosted subreddit. The client is in the Ubuntu repositories but not the server.

It's probably not a good idea to run more than one of these on a set of files. Although with filters, you could use different ones for different files in the same directories, come to think of it.

I guess I will just need to install three or four of them and see how they perform! SparkleShare is a no-brainer here, since I can get a git repo running in no time, so in fact there is no server setup phase!

Syncthing is p2p and connects point to point, and if that doesn't work (which, due to NAT, it often does not) it relies on a public cluster of relay servers; or you can run your own, if you do not trust that the public servers are unable to read the encrypted traffic.

However the p2p nature of Syncthing becomes a bit of a problem if you want to sync between your own devices, because obviously the sync can only work if the machines are switched on and online at the same time. For your laptops, this is unlikely. And hence Syncthing does not work for that scenario. Unless you have an always on machine also in the mix (you can sync many machines, not just two).

But what do we call a machine that is always switched on? Yup, a server. Although the system would be robust since if you lose that machine you can just fire up another one and everything works again.

Still Syncthing feels like it is more for synchronizing files between people. And there git may be a contender. Still Syncthing looks great and I will see if I can tailor it to my needs. Worst case scenario I'll put two servers in the mix, one for relaying and one for making sure synching always works!

That would be two-server serverless architecture :) But with great resilience since the servers can be replaced at any time.


Handy table for memory, speed & recommended values for scrypt, argon2

published Sep 05, 2018 08:35   by admin ( last modified Sep 05, 2018 08:35 )

Policy

published Sep 05, 2018 02:55   by admin ( last modified Sep 05, 2018 03:28 )

"It's amazing China's finding out that its financial and capital control policy boils down to its firewall policy" said Ravikant

 

https://twitter.com/jeorgen/status/909921703970721792


Turn off closing brackets in VS code

published Aug 19, 2018 12:50   by admin ( last modified Aug 19, 2018 12:57 )

This was driving me nuts, with VS Code, from my perspective, randomly adding a closing bracket with complete disregard for context. And moving the cursor to boot. Add this to your preferences:

"editor.autoClosingBrackets": false

Adding a closing bracket while the user is typing only seems to be a moderately good idea when you're typing new code. When editing code, which is at least five times more common than typing new code, adding a closing bracket makes no sense.

For example, say that you have a JavaScript one-line arrow function that should return an object. The line below does not work, because JavaScript assumes that the curly brackets delimit a function body:

foo => {'bletch':foo.blam}

Easy fix, just add brackets around the curlies:

foo => ({'bletch':foo.blam})

But of course VS code will do this when you type the "(":

foo => (){'bletch':foo.blam}

So you might end up with this:

foo => (){'bletch':foo.blam})

and that will not run and you wonder how you could have put in unbalanced brackets. But of course you didn't, VS code put one in for you.

 


How to call a promisified function in-page with Nightmare/mocha/chai

published Aug 17, 2018 07:23   by admin ( last modified Aug 17, 2018 07:23 )

Two important things to keep in mind:

  1. You have to return from inside the "it" function
  2. You need to list the parameters three times: the third time as a list of arguments after the function argument to evaluate()

    it(`Keystretching ${clearTextPassword} with ${hexNonce} with current params should give ${theExpectedDerivedKey}`, function () {
      this.timeout('10s')
      return Nm()
        .goto(`file://${__dirname}/../index.html`)
        .evaluate((clearTextPassword, hexNonce, keyStretchFactor) => {
          return stretch(clearTextPassword, hexNonce, keyStretchFactor)
        }, clearTextPassword, hexNonce, keyStretchFactor)
        .end()
        .then(res => expect(res.derivedKey).to.equal(theExpectedDerivedKey))
    });

 

 


Will crypto-currency driven insurance replace the concept of objective truth?

published Aug 14, 2018 11:35   by admin ( last modified Aug 14, 2018 11:43 )

Imagine a future where everybody believes in their own subjective "truth" and people cannot agree on facts. Some say that is where we are heading. Without facts, words don't mean much, whether in parliament or in law.

How would such a world be ruled? I can think of two ways. One is violence. Whatever you believe in, if you are threatened with violence you have no choice but to comply. It does not matter what you believe in. A kind of mafia driven governance. Probably in a hierarchy since otherwise it would be hard to channel.

But there is another way I think that a world could operate without people being able to agree on truth. And that is markets. First there needs to be a currency for the market. Even if people want to believe in let's say different currencies, some currencies will be clearly better than others. In fact with the aid of blockchains and crypto currencies we may get close to consensus on what currency to use, since believing in the "wrong" currency will be punished as that currency falls in value. So we have step one in consensus: We believe in the same currency.

Secondly, on a blockchain you can have a kind of insurance system, where money is staked, and vouched for something. And if that something misbehaves, the insurance may be triggered and that money being sent to someone else. This is essentially what is called a Third-party insurance.  Imagine for example if every person travelling needs to have a terrorist insurance, so that if they do something bad, their insurance company needs to pay out possibly on the order of billions of dollars to victims and next of kin of victims. This would mean that an insurance company would need to do due diligence assessing the risk of an individual before they give that person an insurance cover.

For a high-risk individual, the insurance premium might run in the vicinity of millions of dollars per week. And you can't travel without it. So such a person would be unable to move.

So truth comes down to whether someone is willing to vouch for you, and entities who have bad judgment about vouching will run out of money.

I'm not saying this is a desirable future scenario, but it may be a way to work around the fact that the future may not believe in facts.


Applying functions to parts of data in a promise chain using JSONPath

published Aug 09, 2018 03:50   by admin ( last modified Aug 13, 2018 03:17 )

When working with promise chains, you sometimes pipe through somewhat more complex data than just a value, and would like a pipeline stage to apply only to subparts of that complex data. I used to have a function that did just that in Python, but now it is time for JavaScript!

Here is one way of doing that in JavaScript that I just came up with. First, what it looks like in practice; here we're calling an uppercase transformation on just part of the data structure being passed through:

.then(forPaths('$..author', uppity)).then(

The above uppercases the value of any property named "author". With JSONPath syntax we could have selected other parts too.

We are using a library called JSONPath to specify which parts of the data structure we would like to apply the function to. Here is a complete working example, using an example data structure from JSONPath's documentation:

const jp = require('jsonpath')

// Here is the general function for applying anything to any
// part of a data structure going through a promise chain.
// It is curried, so the data comes first when the pipeline is executed:

const forPaths = (pathExpr, fun) => data => {
  jp.apply(data, pathExpr, fun)
  return data
}

// some test data, taken from the docs for JSONPath:
var data = {
  "store": {
    "book": [
      {
        "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95
      }, {
        "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99
      }, {
        "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99
      }, {
         "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95
    }
  }
}

// An example function for doing some kind of transformation, in this case convert to uppercase
const uppity = text => text.toUpperCase()

// Finally, an example promise chain:
Promise.resolve(data).then(forPaths('$..author', uppity)).then(JSON.stringify).then(console.log)
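If you would rather not pull in the jsonpath dependency, the same pipeline trick can be sketched without it for simple cases. Note that forPath below is my own hypothetical helper and only understands a single plain dot path, not full JSONPath:

```javascript
// A dependency-free variant of forPaths for plain dot paths.
// It walks the object along the path and applies fun to the final property.
const forPath = (path, fun) => data => {
  const keys = path.split('.')
  const last = keys.pop()
  const target = keys.reduce((obj, key) => obj[key], data)
  target[last] = fun(target[last])
  return data
}

// The same kind of promise chain as above:
Promise.resolve({ store: { bicycle: { color: 'red', price: 19.95 } } })
  .then(forPath('store.bicycle.color', s => s.toUpperCase()))
  .then(d => console.log(d.store.bicycle.color)) // logs "RED"
```

The real JSONPath version is of course more powerful, since '$..author' can match at any depth; the sketch only handles one fixed path.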


 

 

 


Temperature/humidity over which we cannot exist

published Jul 28, 2018 06:20   by admin ( last modified Aug 01, 2018 09:34 )

There is a temperature/humidity threshold above which human life, even at rest, isn't possible. It's 35°C at 100% RH, according to The Economist. The limit of what conditions we can live in can be measured with the wet-bulb temperature. When it goes above 35°C we cannot dissipate heat from our bodies fast enough, so we start to overheat. The temperature is measured with a thermometer wrapped in wet cloth, but can also be calculated/approximated from temperature, humidity, sunlight and air pressure:

The wet-bulb temperature is probably not a very good predictor of the "feels-like" temperature for most common conditions, which is why it is not used for this. However, it can be used to establish an absolute limit on metabolic heat transfer that is based on physical laws rather than the extrapolation of empirical approximations. That is why we focused on it instead of the usual measures.

See:  What is Wet Bulb temperature?

The conditions under which humans cannot exist even in permanent shade, correspond pretty well to the bottom right empty white area in the chart below from https://arielschecklist.com/wbgt-chart/ :

CHART A CELSIUS 1024x822

The reason they list temperatures above 35°C wet-bulb in the chart, I believe, is that they also factor in radiation from the Israeli sun, that is, you're not in the shade. Obviously this is approximate, and I'm not a specialist, but this chart for hikers seems quite handy. The black area indicates forbidden conditions even for healthy & relatively fit hikers. Go to their web site to get full information!

https://www.economist.com/science-and-technology/2018/07/28/heat-is-causing-problems-across-the-world

Calculate the wet-bulb temperature: http://www.climatechip.org/heat-stress-index-calculation
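For a rough offline calculation, there is a simple empirical fit by Stull (2011) that approximates the wet-bulb temperature from air temperature and relative humidity. A sketch: it assumes standard sea-level pressure, breaks down at very low humidity, and ignores sunlight:

```javascript
// Stull's (2011) empirical approximation of wet-bulb temperature Tw (°C)
// from air temperature T (°C) and relative humidity RH (%).
// Only a curve fit: standard sea-level pressure, RH roughly above 5%.
const wetBulb = (T, RH) =>
  T * Math.atan(0.151977 * Math.sqrt(RH + 8.313659)) +
  Math.atan(T + RH) -
  Math.atan(RH - 1.676331) +
  0.00391838 * Math.pow(RH, 1.5) * Math.atan(0.023101 * RH) -
  4.686035

console.log(wetBulb(20, 50).toFixed(1))  // 13.7, the worked example in Stull's paper
console.log(wetBulb(35, 100).toFixed(1)) // close to 35, i.e. at the survivability limit
```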

Today the highest wet-bulb temperatures recorded in the wild are around 31°C. If we get up to 35°C or higher in some areas, those areas are likely to be depopulated.

The maths is simple: a 4°C increase in wet bulb values creates intolerable outdoor conditions, even in the shade in some areas. The Amazon and parts of India would be first, with northern Australia and other regions with very humid summers not too far behind. Simulations of warmer climates show that this happens if the average global surface temperature rises by 6°C.

See: Heat Stress in a Warming World.

Here are maps showing max wet-bulb temperatures today, and if warming goes up 10 degrees Celsius. These images were published in:
Sherwood, S. C. and M. Huber, An adaptability limit to global warming due to heat stress, Proceedings of the National Academy of Sciences, Vol. 107, 2010, 9552-9555, doi:10.1073/pnas.0913352107. Reprint

Pictures taken from here.

twmax highresmap sm

twmax highresmap hot sm

 


How to get in-page attributions semi-automatically out of Flickr

published Jun 24, 2018 03:24   by admin ( last modified Jun 24, 2018 03:24 )

This is a stop gap solution.

First, add this to your CSS:

.attribution:hover .attribution-info {
    display: block;
}

.attribution-info:hover {
    display: block;
}
.attribution-info {
    display: none;
    background: #C8C8C8;
    margin-left: 28px;
    padding: 10px;
    position: absolute;
    z-index: 1000;

}

Secondly, execute this script on a Flickr photo page. I use the JavaScript console, but I wonder if there is a way of making this into a bookmarklet:

window.prompt('Info', '<span class="attribution">(Attribution) <span class="attribution-info">Author: ' + $('.attribution-info>.owner-name').text() + '<br>Source: ' + document.documentURI + '<br>License ' + $('.photo-license-url').attr('href') + '</span></span>')

Take the resulting html, and paste into your web page next to the photo.
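As for the bookmarklet question: any console snippet can be turned into a bookmarklet by wrapping it in an immediately invoked function and prefixing the javascript: scheme. A small helper could generate the bookmark URL (toBookmarklet is my own hypothetical name for it):

```javascript
// Turn a console snippet into a bookmarklet URL:
// wrap it in an IIFE and URL-encode it behind the javascript: scheme.
const toBookmarklet = code =>
  'javascript:' + encodeURIComponent('(function(){' + code + '})()')

// Paste the output as the URL of a new bookmark:
console.log(toBookmarklet("window.prompt('Info', 'hello')"))
```

Run this once on the snippet above, save the result as the URL of a bookmark, and clicking that bookmark on a Flickr page should pop up the same prompt.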

Inspiration: http://jsfiddle.net/q46Xz/


Field watches is where it's at

published Jun 22, 2018 12:55   by admin ( last modified Jun 23, 2018 01:06 )

sinn1scaled2
 

I recently pondered if it would be time to wear a watch again…

In practice the adventurers, aviators, divers and soldiers that lent credibility to certain watches and designs have now all but migrated to digital watches and computers. But analog watches tend to look better on your wrist.

5372909422 0d7f2cc5ba b

A Seiko military watch (Attribution) Author: Brandon Cripps
Source: https://www.flickr.com/photos/brandoncripps/5372909422
License https://creativecommons.org/licenses/by-nc-sa/2.0/

When it comes to analog watches, there are basically two styles:

  • Dress/impress watches. Watches made to be aesthetically pleasing and/or containing advanced functionality of a mostly useless nature. Although this may sound like a bit of a letdown, all analog watches really fall into this group; it's just that some pretend not to.
  • Watches once made for professional use. These are watches that were once the best you could have to help you in your work, whether you were a diver, an aviator, astronaut, adventurer or car racer. Today, you can get a watch model that used to fulfill one of these roles, or something inspired by such watches.

Strictly speaking, there is one group of analog watches that stays relevant from e.g. a prepper perspective: mechanical watches that can withstand strong electromagnetic pulses (EMPs) such as those caused by nuclear explosions. Interestingly, many of the more advanced dress-to-impress watches fall into the EMP-tolerant category. It might be that even electronic watches would survive at least some EMPs; it's hard to find good info on the Internet.

3116404826 2c6aa0a6a9 b

A Luminox field watch

I started browsing through the offerings online and in stores. Soon it became clear to me what I wanted from an analog wristwatch:

Quickly readable

A readable watch face, that is, one that gives you the time with the least amount of attention needed. This means:

  • Readable in darkness — Glow in the dark markings or glow in the dark watch face
  • Arabic numerals — The markings should be numbers, otherwise I could just use a compass and the sun, thank you very much (slightly exaggerated)!
  • Uncluttered — No extra decorations such as:
    • No outer ring with diver's markings
    • No complications (complications are extra dials)
    • No date indicator, I can get that from my phone more reliably

 

19309888151 26a1267776 z
A very readable Laco "1925" (Attribution) Author: Daniel Zimmermann
Source: https://www.flickr.com/photos/callmewhatever/19309888151/
License https://creativecommons.org/licenses/by/2.0/

Other requirements

Some other things crept in to make the watch practical for me.

  • A watch body made out of titanium
  • A battery operated quartz movement, preferably solar so I do not need to change batteries
    • Mechanical watches on the other hand get mechanical problems and hence need maintenance
  • I'm not going to spend a fortune, the threshold being around $250
  • As I learnt more, and realized that analog is all for show, I actually softened a bit on the design side. The watch I finally purchased is perfectly readable, but a bit in a 1940s retro style. There is a picture of it near the end of this post.

Field watches, aviator watches and diving watches

It turned out that there is a genre of watches that ticks most of the above criteria: field watches. Field watches belong to a category of watches for people who need to get the time quickly and reliably under field conditions. In this category you can also find diver's watches, which have a rotating bezel for keeping tabs on oxygen supply, and aviator's watches, which have an outer ring for... I'm not actually sure what that ring is for (update: it is a slide rule!)

 

3948376935 dd847ee351 b

A Luminox watch with a rotating bezel traditionally used for diving

 
 
 

5519978030 3da8e0afdd b

A Luminox watch with three complications (extra dials) and date. The stopwatch dials could come in handy for racing.
That is, if you did not know that digital watches exist, or if you just want to pay homage to old-time racing

Field watches on the other hand are targeted more at infantrymen, who as a rule do not need to calculate remaining oxygen or a flight path, so these watches tend to be no-nonsense. Many watches, at least in the aviation and field genres, tend to be retro-styled and hark back design-wise to a time when analog watches made sense for professionals in these, ehrm, fields.

Diving watches tend to focus on minutes elapsed, which dictates a lot of the design of the watch, including a rotating bezel and very clear marking of minutes. One good thing with diving watches is that they work well in darkness, and that they take pains in communicating during such conditions that the watch is still working, and in distinguishing the hour and minute hands even when they overlap. Still, diving watches are being replaced by computers. The more finely marked red part of a diving watch's bezel is for the ascent to the surface, although computers can now do these calculations more safely and with better precision.

orisrmt

Speaking about emphasizing minutes, this Oris Regulateur der Meistertaucher has nothing but a minute hand on the big dial, no hour or seconds hands; you have to look at the small dials for those

Aviator's/Pilot's/Astronaut's watches cater to the other extreme, being up in the atmosphere or even above it. At lower speeds the minutes are emphasized, at jet speeds time zones become important, and for astronauts there is a logarithmic bezel on e.g. the Omega Speedmaster Professional. The bezel may be used as a slide rule for doing calculations on e.g. fuel consumption.

These guys, Bremont, know how to make gorgeous looking pilot inspired watches:

5923677732 18965418df b

This one is called "P-51", no doubt named after the  US WWII fighter plane

Ironically, aviators still seem to use and appreciate analog watches. While infantrymen and divers now rely on digital computers on their wrists, many aviators already have a big honking computer in the form of a glass cockpit in front of them, so a feeble one on the wrist may not add much.

The specialized brands — and a couple of general ones

There are at least four brands that specialize in affordable field watches, and a couple of general brands that have watches in this category. The four specialized ones I've found are:

  • Momentum
  • Bertucci
  • Luminox
  • Timex

Momentum and Bertucci watches are most commonly of titanium, while Luminox and Timex have few offerings here. There are some more specialized brands that I found via Ebay, such as watches from Swiss army knife brands and a brand called Messerschmitt.

11244234635 d2bdaf04b6 b

A Timex field watch

I also found nice-looking affordable field watches from general watch brands such as:

  • Citizen
  • Seiko
  • Alba (Seiko)

Citizen has the very nice Eco-Drive movement, which seems to last for decades without ever needing a battery change.

3249931880 ba87c59f63 z

A Citizen titanium watch in field watch style (Attribution) Author: Brent C
Source: https://www.flickr.com/photos/8521656@N02/3249931880
License https://creativecommons.org/licenses/by-nc-nd/2.0/

 

A Momentum titanium field watch

 

Aviator's and astronaut's watches

The original astronaut's watch seems to be the Omega Speedmaster Professional. However, the movement of the newly produced watches is not the same as in those that actually went to space.

1200px Omega Speedmaster Automatic (reduced)

An Omega Speedmaster

Aviator's watches have several roots, in France, Switzerland, Brazil, Germany, Italy, the UK and USA. The German variety seems popular and you can find watches in that design style by including the word Flieger ("pilot" in German) in your searches.

A Flieger watch claimed to be from around 1941

Here is a nice series of articles on the history of the pilot watch. It turns out that the original pilot watch brand is Cartier, which made a watch for Santos-Dumont maybe as early as 1906. It is a rectangular watch with Roman numerals. The next step was the brand Zenith, which already in 1909 set the look of the pilot watch that we still know today.

One cool thing with some of the pilot watches is that they have a slide rule bezel. Here's how to use a slide rule bezel on an aviator's watch:

Alternatives to Titanium

If you want to avoid nickel, a nifty way of being able to wear any watch, nickel or not, is to use what is called a bund strap. Bund straps were initially made for German pilots, to thermally insulate the metal watch from the pilot's wrist. A bund strap is wider than the watch under the watch body, and then tapers off into a normal strap.

3589589735 8f6fbfd3af b

A Sinn pilot's watch with a matching Jürgens bund strap

Another brand for Bund straps besides Jürgens, is Fluco. If you want a strap that keeps the generous width all the way round, it's called a cuff strap. Beware that the metal parts of straps may well contain nickel though!

Another option is to wear a plastic watch, but make sure it does not actually have a stainless steel back plate. Plastic is sometimes referred to as "resin".

The German Damasko brand makes its own nickel-free steel. Another German brand, Sinn, may also be nickel free.

Reading Sinn's glossary, they point out that nickel may not necessarily be released even from a nickel containing steel based alloy:

The level of nickel release is not determined by the nickel content of a metal, rather by its corrosion resistance. Only through corrosion processes can nickel escape from a steel alloy in the form of ions or complexes. In highly corrosion-resistant steel, the nickel therefore remains stably bonded in the steel even if it has a relatively high nickel content.

What I actually bought

I had done my research quite thoroughly, and here comes the problem when you've done that: You are getting so good at finding what you want that the deal you find is likely to not be around for long… I've learnt this the hard way through years of scoping out and finally deciding on a piece of merchandise. But come on, one hour!

After having looked at many Bertucci watches I finally set my eyes on a Bertucci watch with a crimson red clock face. Amazon.com said they only had one watch left, and so said Amazon.co.uk. I pondered for an hour, and then I decided to pull the trigger.

Gone. From the face of the Internet. Could not find it anywhere. I decided to stay around for a week to see if it would get re-stocked anywhere. Nope.

So after a week I decided to go for my second best choice, a classic model with a yellow/khaki clock face. I found a vendor that actually delivered to Sweden: Chronopolis.co.uk. I pulled the trigger and they charged me, bam! I got the last one; they were now out of it. Except that three days later they sent me an e-mail saying they never had it in the first place!

Was this a sign that I should buy something else or none at all? Eventually I ordered the same watch from Long Island watch:

bertu

And here it is!
It's chunkier, bigger and more contemporary military than I thought, and the watch face is more khaki than yellow in daylight. But overall it looks really nice.
One reason it looks so contemporary military, I guess, is that the sand color scheme makes it look like a watch for desert-like environments, where unfortunately a lot of action has been seen recently.

 

One thing to look out for is whether the minute hand is misaligned with the minute markings. That is, as the seconds hand strikes twelve, the minute hand is right in between two minute markings. You can see in the picture above that — compensating for the angle of the shot — the minute hand is pretty much aligned with a minute marking on the watch face. Unfortunately the seconds hand is pointing straight down at this point in time.

This means that in practice the resolution with which you can tell the time is about one minute.

It's easy to fix — you can adjust the precise alignment relationship between the seconds and minute hands with the crown: just pull out the crown when the seconds hand hits twelve, then carefully turn the crown to align the minute hand with the desired minute marking. Then push the crown in to make the watch start ticking again. Now the minute hand will be aligned with a marking as the seconds hand strikes twelve.

Incidentally, getting the alignment correct was a huge boost in confidence in the watch! Such a simple thing to do with a big usability/trust payoff.

Image credits/links

https://www.flickr.com/photos/8521656@N02/3249931880
https://www.flickr.com/photos/brandoncripps/5372909422
https://c1.staticflickr.com/3/2874/11244234635_d2bdaf04b6_b.jpg
https://c2.staticflickr.com/6/5018/5519978030_3da8e0afdd_b.jpg
https://c1.staticflickr.com/3/2598/3948376935_dd847ee351_b.jpg
https://c1.staticflickr.com/4/3124/3116404826_2c6aa0a6a9_b.jpg
https://c2.staticflickr.com/6/5752/22513108655_c08226dd19_b.jpg

https://www.flickr.com/photos/noodlefish/5923677732/

https://www.flickr.com/photos/ironhide/734069587

https://www.flickr.com/photos/jmarkbertrand/3589589735

https://de.wikipedia.org/wiki/Armbanduhr#/media/File:Omega_Speedmaster_Automatic_(reduced).jpg

https://www.flickr.com/photos/technewatches/9648821410/

https://www.flickr.com/photos/27862259@N02/6962280707/

https://de.wikipedia.org/wiki/Fliegeruhr#/media/File:TESTAF-Sinn_EZM10_EZM9_857LHC.JPG


How to change a video to have one still image and all keyframes & resize it

published Jun 20, 2018 07:55   by admin ( last modified Jun 20, 2018 08:02 )

So I wanted to produce a video that is very small in size, where the imagery would be meaningless but the sound OK. I needed this to test that https://github.com/jeorgen/align-videos-by-sound actually does what it should, that is, correctly analyze how offset in time video files are that film the same event. I also wanted all frames to be keyframes, but since the tool analyzes sound, that may not actually have been needed.

Make a video only display a static image on every frame (but still have many frames)

ffmpeg -i invideo.mp4 -i still_image.png -filter_complex "[1][0]scale2ref[i][v];[v][i]overlay" -c:a copy out.mp4

Scale the size of the video:

ffmpeg -i invideo.mp4 -filter:v scale=10:-1 -c:a copy out.mp4

The above sets the width to 10 pixels; -1 makes ffmpeg set the height proportionally, keeping the aspect ratio

Make a video only have keyframes

-force_key_frames 'expr:gte(t,n_forced)'

(I think this works, haven't verified. n_forced is ffmpeg's expression variable for the number of keyframes forced so far, so this forces roughly one keyframe per second. To make literally every frame a keyframe, setting the GOP size with -g 1 should also work.)

 


It seems I've finally left Sublime for VS Code and Atom

published Jun 17, 2018 11:08   by admin ( last modified Jun 17, 2018 11:08 )

I don't remember what it was now, but there was a piece of functionality I could get in VS Code and not in Sublime, and it made me switch. Recently I have been working with s-expressions so I have used Atom for that, since Atom has better integration of Parinfer than VS Code has (same guy doing the plugin for both).

Both Microsoft's VS Code and GitHub's Atom are built on top of Electron, which is basically Chromium packaged together with Node.js as a desktop application platform.


Me talking at events the last month

published Jun 14, 2018 01:09   by admin ( last modified Jun 14, 2018 01:09 )

Swedish Land Registry and Blockchain event June 11

demo20180611

WechatIMG9

Bloxpo event in May — presentation of Chromapolis

 


Two step keyboard shortcuts are actually pretty nice

published May 23, 2018 04:20   by admin ( last modified Jun 12, 2018 01:21 )

Summary: In the Atom editor, you can do ctrl+k and then another ctrl+character thingy, or just ctrl+k and another character. That effectively triples the number of shortcuts available and is kind of easy to do.

Edit 2018-06-05: This holds true for VS Code too

I needed to use the Atom editor for its Lisp support (which VS Code seems to lack; Edit 2018-06-05: VS Code actually has Parinfer too; Edit 2018-06-12: but it is inferior, maintained by the same guy, who focuses more on the Atom Parinfer plugin), and I needed to uppercase some things, but there was no shortcut in the menu for it. I opened the shortcut settings and found that it is reachable by

ctrl-k ctrl-u

It seems that ctrl-k works as a mode for entering a "subcommand", similar to screen, where ctrl-a does that. It worked surprisingly well. However, Atom does not list the shortcut next to the menu option, which kind of defeats, or at least hampers, the purpose.