
Towards a git-based backup system

published Dec 28, 2019 11:05   by admin ( last modified Dec 29, 2019 05:23 )

The following things need to be solved and I believe they are solved:

  • How to make it impossible to rewrite history on the remote, so your backup cannot be wiped by a malicious client
  • How to add all files, even those that git has never seen before
  • How to handle all merge conflicts automatically, by splitting the conflict into two files, one for each version (this is actually more for synchronization than backup)

How to make it impossible to rewrite history on the remote

Untested by me, but this can be handled by:

  • Only allowing fast forwards
  • Denying all deletes

It seems this needs to be done system-wide, so you need to dedicate a server, virtual server or container to this. And you probably need to configure it yourself, since I know of no services that would let you do this.

The settings are the following.

 git config --system receive.denyNonFastforwards true

and

 git config --system receive.denyDeletes true

Got that info from here: https://stackoverflow.com/questions/2085871/strategy-for-preventing-or-catching-git-history-rewrite
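Putting those pieces together, here is a sketch of what the server-side setup could look like. The path /srv/backup.git is an example; note that the same settings also seem to work when set per repository directly on the bare repo, which is handy if you cannot touch the system config:

```shell
# On the dedicated backup server (the path is an example)
git init --bare /srv/backup.git

# Forbid history rewrites and ref deletion for every repo on this machine
git config --system receive.denyNonFastforwards true
git config --system receive.denyDeletes true

# Alternatively, set the same options on just the one repository:
git -C /srv/backup.git config receive.denyNonFastforwards true
git -C /srv/backup.git config receive.denyDeletes true
```

With this in place, a normal push goes through, but a force push or a branch deletion from a client is rejected by the server.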

How to add all files, even those that git has never seen before

This is simply done with the git command:

git add -A

How to handle all merge conflicts automatically, by splitting the conflict into two files, one for each version

This can be handled by writing a custom merge tool.  According to the git docs:


    the configured command line will be invoked with $BASE set to the name of a temporary file containing the common base for the merge, if available; $LOCAL set to the name of a temporary file containing the contents of the file on the current branch; $REMOTE set to the name of a temporary file containing the contents of the file to be merged, and $MERGED set to the name of the file to which the merge tool should write the result of the merge resolution.
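Based on those variables, a conflict-splitting "merge" could be wired in roughly like this. The tool name splitter and the .theirs suffix are my own sketch, not an existing tool:

```shell
# Register a hypothetical tool that keeps both versions instead of merging:
git config mergetool.splitter.cmd 'cp "$LOCAL" "$MERGED" && cp "$REMOTE" "$MERGED.theirs"'
git config mergetool.splitter.trustExitCode true

# After a conflicted merge, resolve every file by splitting it in two:
git mergetool --tool=splitter
```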
 

One thing that needs to be checked is what git does if two people rename the file at the same time. However, this whole merge-conflict matter is really more about synchronization than backup, and the question is whether you want that for backup at all. With a de-duplicating file system it might be fine to keep a separate repo for each device.

 


KeepassXC (Ubuntu) & Keepass2Android w. sync, initial tests

published Dec 26, 2019 12:35   by admin ( last modified Dec 27, 2019 03:37 )

Keepass is originally a Windows-based password manager that has grown into an ecosystem of several Keepass-compatible password managers, plus plugins and protocols built around the kdbx file format.

Most of it, possibly all, is open source. Often you want to use the same credentials on several of your devices, and hence you need to be able to sync.

Here I am testing KeepassXC (on Ubuntu but should be the same for most Linux distros, also available for Windows and MacOS) and Keepass2Android.

KeepassXC is a community rewrite of KeepassX, and Keepass2Android is recommended for Android on the site of KeepassXC. Both are open source.

In initial tests so far, the syncing seems to work. Update: I just tested changing memory hardness of Argon2 key derivation on the Android side, and it propagated fine to Linux. I then tested to change number of rounds of key derivation on the Linux side, and that propagated fine to Android. Impressive.

Syncing in KeepassXC and Keepass2Android

KeepassXC doesn't have any syncing capabilities of its own. It relies on external solutions such as sshfs, Dropbox and so on. I understand that they do not want to clutter the app with syncing code, but my guess is that there will be corner cases depending on the sync technology chosen. KeepassXC itself warns about it:

[Screenshot: 2019 12 26 23 09]

Maybe the wording is unfortunate, but in the screenshot above it seems that you have to choose between "compatible" and "safe" for "Dropbox, etc".

Keepass2Android, on the other hand, has built-in support for sftp, WebDAV and a number of magic-folder solutions (Dropbox, OneDrive et al.)

One other option is to use Syncthing to sync the databases. However in my experience Syncthing has corner cases of its own. Regardless of which technology you choose you should also make backups; synchronization is not backup and sync can get it wrong and wipe your stuff.

Here are some things to watch out for with KeepassXC and Keepass2Android:

KeepassXC on Ubuntu (and on other Linuxes)

Don't use snap (for most sync scenarios)

The snap-packaged version doesn't connect well to externally mounted filesystems. So if you want to use sshfs or any other such tech for syncing, it's better to use e.g. the AppImage version. This is actually mentioned in their FAQ:

Due to Snap's isolation and security settings, you cannot access any files outside your home directory.

In my experience, that also includes filesystems mounted inside the user's home directory.

https://keepassxc.org/docs/#faq-appsnap-homedir

How to use sshfs

For automounting an sshfs volume that tolerates IP number changes and connection problems, on a Linux running systemd, check my blogpost FSTab: How to mount an sshfs volume that tolerates ip number changes and connection errors

Keepass2Android

Sftp rather than WebDAV

Use sftp rather than WebDAV (I'd say), because the sftp option allows authenticating with a key file, whereas a username/password combo is the only option WebDAV offers.

Do not bother with specifying path

Specifying the path in the sftp dialog seems tricky. In my experience it is better to leave it at "/" and navigate to the file in the next dialog.

Keystretching (sometimes called "key derivation") and encryption

Both KeepassXC and Keepass2Android have state-of-the-art keystretching (key derivation) and encryption. The keystretching can be set to Argon2, which is almost too new to trust completely (I would have liked to see scrypt in there as an option too). Still, it's a breath of fresh air compared to the popular PBKDF2, which is not memory-hard and hence ought to be vulnerable to attacks from ASICs.

For the symmetric encryption both AES and ChaCha20 are available, the same ciphers that underpin much of the Internet.

How good are KeepassXC and Keepass2Android at syncing with each other?

Well, that remains to be seen! Credentials seem to propagate fine so far. As written further up, I tested changing memory hardness of Argon2 key derivation on the Android side, and it propagated fine to Linux. I then tested to change number of rounds of key derivation on the Linux side, and that propagated fine to Android.

But having that set up, there is now a need for a backup solution. I'm actually thinking of using git for this, but I have not made up my mind yet; it makes sense to use well-known and widely used software components. A malicious attacker that gets commit access to the remote git repository could rewrite the git history and hence delete old backups, though.



Block any rewriting of the history of a Git repository?

published Dec 26, 2019 01:35   by admin ( last modified Dec 27, 2019 05:01 )

Untested by me as of yet, but it seems possible to actually make a Git repository where you cannot rewrite history. You need to do two things:

  • Only allow fast-forward updates of branches (this applies to refs being updated on push, as far as I understand)
  • Deny all deletes

 

 git config --system receive.denyNonFastforwards true

and

 git config --system receive.denyDeletes true

Got the info from here: https://stackoverflow.com/questions/2085871/strategy-for-preventing-or-catching-git-history-rewrite

Update 2019-12-27: This is a system-level config, as presented. If you set it as a local config instead, the question is how you get it onto the server you're pushing to; letting the client control that would be a security problem. So it seems you have to set up a git server specifically for this, and set the properties at the system level, as indicated. Potentially you could run that git server in a container.


FSTab: How to mount an sshfs volume that tolerates ip number changes and connection errors

published Dec 25, 2019 10:20   by admin ( last modified Dec 25, 2019 10:26 )

…and can only be accessed by one user. Also, this is on Ubuntu, and I guess it should work on other systemd-based distros. Just installed and it works; we will have to see how it performs. Here is the whole shebang. I did not break it up with line continuation characters, because it already looks a bit like line noise:

remote-username@server.example.com:/path/on/remote/to/folder /path/to/local/mount/point  fuse.sshfs noauto,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,x-systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/home/localuser/.ssh/id_rsa,allow_other,default_permissions,uid=1000,gid=1000 0 0

1000 is the numerical id of the user that should be able to access the volume. You can check your uid and gid with the id command:

$username> id
uid=1000(username) gid=1000(username)


Also, while root, make sure that the server ends up in root's known_hosts file:

$username> sudo -i
#root> ssh remote-username@server.example.com

That should trigger an add-to-known-hosts dialog.

Also, you need to have sshfs installed:

$username> sudo apt install sshfs


 


TIL about the concept of slots that Git uses for merges

published Dec 24, 2019 01:20   by admin ( last modified Dec 25, 2019 01:54 )

According to an answer on this question: https://stackoverflow.com/questions/50232177/tell-git-to-resolve-conflict-with-a-third-file-which-is-the-answer and to the best of my understanding of that answer, git may use up to four slots in its index (staging area, cache) for a file.

If there is no merge problem, the file you've changed is simply in slot 0 and that's it. If there is a merge problem however:

  • The most recent common ancestor is in slot 1
  • Your version of the file ("ours") is in slot 2
  • The version it is in conflict with ("theirs") is in slot 3

Your job is then to resolve the conflict in the working-tree file, which now has conflict markers based on these versions, and place the resolved version in slot 0 (git add does that), and then git can commit it as per usual.
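The slots can also be listed directly with git ls-files -u. A quick experiment showing the layout (the branch and file names are just examples):

```shell
#!/bin/sh
# Manufacture a conflict, then look at the index slots.
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo base > f.txt && git add f.txt && git commit -qm base
git checkout -qb theirs && echo theirs > f.txt && git commit -qam theirs
git checkout -q - && echo ours > f.txt && git commit -qam ours
git merge theirs || true     # conflict: f.txt gets conflict markers
git ls-files -u              # lists slots 1, 2 and 3 for f.txt
git show :1:f.txt            # prints "base" (common ancestor)
git show :2:f.txt            # prints "ours"
git show :3:f.txt            # prints "theirs"
```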

 

According to another answer by the same author (torek),

https://stackoverflow.com/questions/51977050/restore-a-remote-file-after-solving-a-merge-conflict-using-git

you can get to the contents of a file in a slot like this:

git show :3:path/to/filename

…and then you can store that somewhere. This means that you can save away a conflicting version under a new name, which ought to be good when you need some kind of conflict resolution and no human is around (or at least no git-human).

git show :3:path/to/filename > path/to/new-filename

Actually, it is probably easier to save away the conflicting version under a new name, and then just remove the original file

git show :3:path/to/filename > path/to/new-filename

And then git add that and some git reset on path/to/filename.

Automatically create a diverging file instead of merging two versions

Here is a way that could work for that:

Do a git merge

git merge their-branch

If that does not go cleanly, save away the incoming conflicting file with

git show :3:path/to/filename > path/to/new-filename

Then do a

git merge --abort

Then do the git merge again, but with strategy ours:

git merge their-branch -s ours
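The steps above can be sketched as one script (the path and branch name are the examples from the text; the .theirs suffix is my own convention):

```shell
#!/bin/sh
# Merge, but on conflict keep both versions as separate files.
f=path/to/filename                    # example path of the conflicting file
if ! git merge their-branch; then
  git show ":3:$f" > "$f.theirs"      # save the incoming version aside
  git merge --abort                   # throw the conflicted state away
  git merge -s ours -m "kept ours" their-branch   # redo, keeping our content
  git add "$f.theirs"
  git commit -m "Saved diverging copy of $f"
fi
```

With several conflicted files you would loop over git diff --name-only --diff-filter=U instead of a single hard-coded path.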

Actually, a better method may be to configure a custom merge tool and run it by name (git mergetool -t takes a configured tool name, not a path; "mytool" here is whatever name you choose):

git config mergetool.mytool.cmd '/path/to/custom/mergetool "$LOCAL" "$REMOTE" "$MERGED"'
git mergetool -t mytool

Then that tool can do whatever it wants.

According to git docs:

the configured command line will be invoked with $BASE set to the name of a temporary file containing the common base for the merge, if available; $LOCAL set to the name of a temporary file containing the contents of the file on the current branch; $REMOTE set to the name of a temporary file containing the contents of the file to be merged, and $MERGED set to the name of the file to which the merge tool should write the result of the merge resolution.

 


Use setgid to allow 2 users to edit all files in a directory

published Dec 19, 2019 12:50   by admin ( last modified Jan 8, 2020 11:23 )

Linux: A setgid on a directory can control with what permissions contained files and directories are created. Let's say you have two users on a development machine, foo and bar. You want user foo to handle the GUI (such as X), git commits, pulls and pushes, but you want user bar to be the one that runs the files. User bar is not used for anything else and it does not matter if user foo can read all bar's files.

This division should give a bit of protection against running malicious code, as long as user bar has nothing worth looking into or executing. Bear in mind though that having shell as a user on the machine may make it easy to escalate, depending on the setup.

Now, on a directory inside of user bar's home directory, make sure the group is bar (it probably already is):

sudo chown bar:bar adirectory

Now set the permissions of the directory to 2770

sudo chmod 2770 adirectory

Now make sure the user foo also has the group bar.

Now files created by user foo or user bar will be editable by the other user too. The leading "2" in 2770 is the setgid bit.

Nota bene that this only works if the users have a generous umask, such as 002. If e.g. user foo has umask 022, then the files and directories it creates will not be writable by bar. One way around this is to use umask 002 for user foo when working in the directory.
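The steps above, collected (run as root; foo, bar and adirectory are the example names from the text):

```shell
# Make the directory owned by group bar and set the setgid bit:
chown bar:bar adirectory
chmod 2770 adirectory   # the leading 2 is setgid: new files inherit group bar

# Give user foo the group bar as a secondary group:
usermod -aG bar foo

# foo must log in again (or run "newgrp bar") for the new group to take effect.
```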


How to start an AVD in Android Studio on Aspire 5 with an Intel GPU (I think)

published Dec 02, 2019 04:50   by admin ( last modified Dec 02, 2019 04:50 )

I could not get a Google Pixel 3A virtual device to start in Android Studio 3.5.2 on Ubuntu 19.04. The solution seems to be to use the "Swiftshader" GPU emulation. However, I could not enable it. Eventually I edited the config file directly on disk, and made sure to enable cold boot. Without cold boot it crashed, due to Swiftshader not supporting checkpoints afaict.

I found the device file as

$HOME/.android/avd/Pixel_3a_API_29.avd/config.ini

And I changed to:

hw.gpu.mode=swiftshader

However after starting up the editor, it seems to have changed it to:

hw.gpu.enabled=no
hw.gpu.mode=off

But at least it starts up the AVD now!
 


Trying to understand colour grading and exposure: Workflow & edits

published Dec 01, 2019 03:10   by admin ( last modified Dec 01, 2019 07:44 )

In this video, beginning at 9:28, Color Grading Central goes through video color grading and light editing in Davinci Resolve 16: https://www.youtube.com/watch?v=NoyDMKqo80U

Now, colour grading video is not something I've ever done, but it so happened that I watched this video on the topic, and I realised that the info might be especially helpful for those of us who just post images and videos occasionally, to at least get the basics right.

So here's how I interpret the info in the video: Correcting the color and exposure in a video or image is a lot about using the available dynamic range on screen and in the human eye. In that way the material that you are presenting is using as much as possible of the viewer's perceptual space, so to speak.

1. Start with adjusting the exposure. This means adjusting the darkest shadows to be near black (since we are then using as much as possible of the perceptual space available).

2. After that, adjust the highlights to near white.

3. After that, the midrange probably sits a bit too high in the image. Adjust the curves so that the midrange spreads out across the perceptual space to give the most detail.

4. Adjust the white balance. Find an area in the image that is supposed to be white, and use that as a reference point to white (in the 1980s I did some video, and back then you would white balance the video camera by pointing it at a white paper and press the "white balance" button)

5. Adjust the saturation of the image, by analysing how much there is and then e.g. increase it, again to fill up the perceptual space.

 

Bonus point: there is an aesthetic called "orange/teal" which gives faces a special colour that pops against the background.

 

In professional video, it seems you often record with an exposure curve that looks unnatural but preserves the most dynamic range. That is, you always need to colour grade in post to get back to natural. This unnatural curve is called a "log profile" or "flat profile". The logic seems similar to audio technologies such as RIAA correction in record players, and dbx or Dolby noise reduction: you record in a compressed or expanded way, and then recompensate at playback to improve the signal-to-noise ratio.


One of the world's best video editors, on Ubuntu for free!

published Nov 30, 2019 01:33   by admin ( last modified Nov 30, 2019 01:33 )

Video editing on Linux: Blackmagic's Davinci Resolve, one of the world's best video editors—running on my Ubuntu Linux laptop for the price of nothing. Incredible!
#YearOfTheLinuxDesktop

 

[Screenshot: 2019 11 29 23 53]

 

If you are on Ubuntu:
1) Make sure you have an Nvidia video card
2) Install both cuda and opencl libraries
3) Use this conversion script to make a .deb from the Resolve installer: danieltufvesson.com/makeresolvedeb
4) Install.
5) Use ffmpeg to convert files to MOV, Resolve supports that on Linux.

Gramtropy — a way to make pronounceable passwords with defined entropy

published Nov 29, 2019 03:55   by admin ( last modified Nov 29, 2019 03:53 )

https://github.com/sipa/gramtropy

Pronounceable passwords are a heck of a lot easier to read and type.

"[Gramtropy] aims to solve the problem of generated passwords that are pronouncable according to arbitrary rules, while simultaneously guaranteeing a given security level (in bits)"


How to tell NetworkManager (I guess?) to use a VPN for a connection on Ubuntu

published Oct 20, 2019 05:03   by admin ( last modified Oct 20, 2019 05:03 )

Set it in nm-connection-editor. If that program does not exist on your system, install it.


If you need to change or create volumes on an LVM system, use lvm

published Oct 18, 2019 12:25   by admin ( last modified Nov 03, 2019 03:27 )

The lvm command-line program, at least on Ubuntu, is well documented, with lots of help inside the shell that is created when you type sudo lvm.

LVM is a way of organizing disks into volumes on, among other systems, Linux.


Bruce Schneier on quantum computers, encryption and the future

published Oct 17, 2019 06:25   by admin ( last modified Oct 17, 2019 06:25 )

Great overview by Schneier of how quantum computing may break cryptography:
"Maybe the whole idea of number theory—based encryption […] is a temporary detour based on our incomplete model of computing"

"Symmetric cryptography is so much nonlinear muddle, so easy to make more complex"

 

https://www.schneier.com/essays/archives/2018/09/cryptography_after_t.html


Navigo.js works as replacement router for Riot.route in Riot4.js

published Oct 16, 2019 05:50   by admin ( last modified Oct 16, 2019 08:40 )

The "Riot route" router of the Riot project does not work with Riot.js 4 at the time of this writing. It had to be switched out, and I settled for Navigo.js. I have not used it extensively, but here are the changes I had to make, including explicitly unmounting components and respecting that riot.compile is now asynchronous and separate. The code below is for the on-the-fly compiling:

Before with Riot3.js and Riot.route

 <!-- Load riot live compiler for on-the-fly compiling, it compiles automatically -->
  <script src="http://cdn.jsdelivr.net/npm/riot@3.13/riot+compiler.min.js"></script>

  <!--   Load riot's router -->
  <script src="https://cdn.jsdelivr.net/npm/riot-route@3.1.4/dist/route.js"></script>

    route('login', function (name) {
      riot.mount('div#mainview', 'login')
    })
    route('profile', function (name) {
      riot.mount('div#mainview', 'profile')
    })
    route('mytasks', function (name) {
      riot.mount('div#mainview', 'mytasks')
    })

    route.start(true)
    route('login')

 

And now with Riot4.js and Navigo

 <!-- Load riot live compiler for on-the-fly compiling, it does not compile automatically -->
  <script src="https://cdn.jsdelivr.net/npm/riot@4/riot+compiler.min.js"></script>

  <!--   Load the navigo router -->
  <script src="https://cdn.jsdelivr.net/npm/navigo@7.1.2/lib/navigo.min.js"
    integrity="sha256-EfgFBwdiJuG/NJPYFztHuhSHB1BP4y2yS83oTm6iP04=" crossorigin="anonymous"></script>

  <!-- Configure the router and load initial route -->
  <script>
    const router = new Navigo(null, true);

    riot.compile().then(() => {
      riot.mount('#mainview', {}, 'login')
    }).then(() => {
      router
        .on({
          'login': function () {
            riot.unmount('#mainview', true)
            riot.mount('#mainview', {}, 'login')
          },
          'profile': function () {
            riot.unmount('#mainview', true)
            riot.mount('#mainview', {}, 'profile')
          },
          'mytasks': function () {
            riot.unmount('#mainview', true)
            riot.mount('#mainview', {}, 'mytasks')
          }
        })
        .resolve()

      router.notFound(function (params) {
        riot.mount('#mainview', {}, 'login')
      }).resolve()
    })

 

 


Finding a good one-piece, clip-on voice recorder (dictation recorder)

published Oct 10, 2019 10:50   by admin ( last modified Dec 15, 2019 02:52 )

Up to €/$/£200 I have found these candidates:

  • Sony ICD-TX800 at around 200 €/$/£, a bit on the expensive side
  • Olympus VP-10 at around 100 €/$/£, great sound according to Youtube videos, but I've tried the clip and it slides off of garments, so it is out of the race
  • Sony icd-tx650 at around 150 €/$/£, seems to have built-in compression that gives a weird pumping effect according to a Youtube video. But this video shows it working great

 

Background

As an extra precaution, I like to mic myself up with an extra device when I do a presentation or lecture that is recorded to video. It should be:

  • A self-sufficient unit
  • Discreet and small enough to not draw questions or opinions (but it can be clearly visible)
  • Out of the way enough to not irritate the person who mics me up with the usual gear
  • Equipped with a strong enough clip to stay put in different positions

 

Another use case is for micing up a person I would interview. In that case it should be:

  • Easy enough to place on any garment so that a good recording can be obtained
  • Classy-looking enough that the subject does not feel uncomfortable with, say, a taped-together contraption

keywords: dictaphone

 

Update 2019-12-14: I got the Sony ICD-TX650. I have used it once to mic up a female speaker, and it distorted a bit at the medium recording-level setting. I have now adjusted it to low, to see if that will work for the next recording session. The Sony ICD-TX650 remained firmly in place, clipped to a t-shirt collar.

It makes sense to use the low recording level when micing up a person with the device on the t-shirt collar, as compared to e.g. recording a room or pointing the recorder at a lecturer from some distance. In both those cases higher sensitivity is needed.

Soon we should see 24-bit recording with enough headroom that recording levels don't matter. It is already used in medium- and high-end audio recorders and in the Motu M2 sound interface.


A report on firmware/OS hardening in IoT devices

published Oct 09, 2019 09:30   by admin ( last modified Oct 09, 2019 09:30 )

"The more area covered, the better the binary hardening (on average)." Synology seems to have better-hardened IoT devices than the others. Cyber-ITL takes a look at 22 brands:

[Image: EF9f85rXoAEjzzZ]

OpenWRT looks better than DD-WRT too.

 

https://cyber-itl.org/2019/08/26/iot-data-writeup.html


Calling on videoing witnesses

published Oct 07, 2019 04:56   by admin ( last modified Oct 07, 2019 04:56 )

A cool idea would be to have an app that summons witnesses to wherever you are, and they film, streaming.


Javascript template/app libraries with the least steep learning curve

published Sep 23, 2019 10:00   by admin ( last modified Sep 26, 2019 01:02 )

About a year ago I worked on a project where the devs wouldn't have had the time to learn a complex framework such as React.js. They can learn one later if needed. Eventually we went with Riot.js.

Here were some of the initial ideas:

  • Prefer existing HTML tags over made-up tags
  • Separate html pages for separate, well, pages
  • Ability to only partially take over a page
  • Ideally for each html tag, you can specify code for it, and on what events that code should trigger
  • Or the other way around, specify for an HTML element, what it is bound to

So here is a list of template/app libraries/frameworks and my initial impression of the learning curve compared to what they can do:

| Thingy | Learning curve | Plain HTML/Javascript | Incremental use | Comments |
| --- | --- | --- | --- | --- |
| Riot.js — Simple and elegant component-based UI library | low | No | Yes | Custom tags, would prefer attributes to existing tags; there seem to be some edge cases that are documented but that could trip devs up |
| Svelte — The magical disappearing UI framework | high | | | |
| Flight by Twitter | | | | Uses events exclusively between components, so no references. Not updated for 3 years |
| Marko | high | | | |
| Cell.js | low | Yes | | Completely expressed with javascript object literals |
| stimulusjs | | Yes | | Uses normal tags with custom attributes, as for example Zope Page Templates does |
| Polymer | high | | | |
| T3 JavaScript Framework | lowish | | | Uses events like Flight.js, but marshalls them into an event bus. Not updated in 2 years |
| Famous/framework | | | | Not updated in 3 years |
| infernojs | | | | JSX |
| React.js | high | | | |
| Vue.js | | | | |
| EJS -- Embedded JavaScript templates | high | | | |
| Mithril | | | | |
| Sam-js | low | Whatever | No! | Uses the SAM pattern which I like, & is similar to how I did things with desktop GUIs. APIs not stable, says site |
| Ractive | | | | No widget/component arch. |
| Meiosis | | | | |
| Flux | | | | |
| lit-html | | | | Efficient rendering of templates from javascript template strings |
| https://www.npmjs.com/package/jsx-render | | | | Render JSX without the React overhead |

 


How to concatenate 2 (or more) mp4 files with ffmpeg

published Sep 20, 2019 01:12   by admin ( last modified Sep 20, 2019 01:12 )

On Linux, put the file names with paths and prefix each with "file" in a file called e.g. files.txt:

file ./foo.mp4
file ./bar.mp4

Then

ffmpeg -safe 0 -f concat -i files.txt -c copy concatenated.mp4
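If there are many files, files.txt can be generated instead of written by hand (a sketch; the glob and file names are examples):

```shell
# Build files.txt from every mp4 in the current directory, in name order,
# then concatenate them without re-encoding.
for f in ./*.mp4; do printf "file '%s'\n" "$f"; done > files.txt
ffmpeg -safe 0 -f concat -i files.txt -c copy concatenated.mp4
```

Note that on a second run the output file concatenated.mp4 would itself match the glob, so write the output elsewhere or name it differently.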

If the man pages refuse to show on your Ubuntu server, even though you've installed it/them

published Sep 06, 2019 04:38   by admin ( last modified Sep 06, 2019 04:38 )

There might be a file that just silently thwarts your attempts:

/etc/dpkg/dpkg.cfg.d/excludes

https://github.com/tianon/docker-brew-ubuntu-core/issues/126#issuecomment-394403782