Archives 2005 - 2019


Comprehensive measurement of accuracy of cheap humidity sensors

published Dec 26, 2015 12:15   by admin ( last modified Dec 26, 2015 12:16 )

A guy has put in a lot of time checking whether some inexpensive humidity sensors actually are accurate (spoiler: they are surprisingly good).

 

Instead of varying concentrations of a particular salt I use saturated solutions of several different salts. Depending on the solubility of a particular salt, a different relative humidity will be generated in equilibrium with the solution.


Read more: Link - Test and Calibrate DHT22 / AM2302 / RHT03 hygrometers

Compare DHT22, DHT11 and Sensirion SHT71

One can buy the DHT22 here and there are some libraries too:

DHT22 temperature-humidity sensor + extras ID: 385 - $9.95 : Adafruit Industries, Unique & fun DIY electronics and kits
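
For reference, reading one of these from Node.js could look roughly like this, assuming the node-dht-sensor npm package (package name and call signature not verified here, and GPIO pin 4 is just an example):

var sensor = require('node-dht-sensor')

// 22 selects the DHT22/AM2302 sensor type, 4 is the GPIO pin it is wired to
sensor.read(22, 4, function (err, temperature, humidity) {
    if (!err) {
        console.log('temperature: ' + temperature + ' C, humidity: ' + humidity + ' %')
    }
})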


A simple system for sensor & controls at home

published Dec 26, 2015 08:45   by admin ( last modified Dec 27, 2015 05:37 )

A lot of home tinkering would be easier if there were ready made units that you could link together either wirelessly or via USB. Maybe this already exists in some way, shape or form but it is not obvious to me where and at what price in that case.

It seems to me that for home tinkerers working with Arduinos, ESP8266s and Raspberry Pis, there should be room for a whole lot of compartmentalization.

GUI and communications unit

Should be Android phones and tablets. There is no way it is cheaper and more convenient to cobble together a touch screen, batteries, USB/WiFi/Bluetooth/3G/4G/LTE connectivity and charging circuits than simply using an Android device. Many nowadays even have wireless charging.

Power control unit

Controlled over e.g. USB. Switches mains power on and off, and guaranteed to do so in sequence between different connectors, in case you want to mimic a rotary switch. There is a USB-controlled multi-channel relay board on Amazon, but it seems to close all circuits on boot, which can be anything from annoying to catastrophic depending on your project: Amazon.com: SainSmart USB Eight Channel Relay Board for Automation - 12 V: Industrial & Scientific.

From one of the reviews:

I think the biggest issue is that way the thing reacts to rebooting the machine it is connected to. I haven't figured out the exact sequence, but I think it turns on all relays when the computer/usb reboots, and then again when the driver loads

So at the moment it may be a better idea to put together a device from either a Raspberry Pi or an Arduino, and then connect that to a relay board and mount it all in a case.
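
A minimal sketch of that idea in Node.js, assuming the onoff npm package on a Raspberry Pi (the GPIO pin numbers are made up, and many relay boards are active low, so the 0/1 levels may need to be inverted for your board):

var Gpio = require('onoff').Gpio

// one GPIO output per relay channel
var relays = [17, 27, 22].map(function (pin) {
    return new Gpio(pin, 'out')
})

// start with every relay released, so nothing switches on at boot
relays.forEach(function (relay) { relay.writeSync(0) })

// mimic a rotary switch: close one channel at a time, in sequence
var current = 0
setInterval(function () {
    relays[current].writeSync(0)              // open the previous channel
    current = (current + 1) % relays.length
    relays[current].writeSync(1)              // close the next one
}, 1000)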

Network interface unit

Connects to the power control unit. Could be Bluetooth, Bluetooth LE, WiFi, 3G/4G/LTE or something.

Sensors

Sensor, powered locally by AA, AAA or button cell. Reports its readings wirelessly. Texas Instruments seems to have a candidate: SimpleLink™ Bluetooth Smart®/Multi-Standard SensorTag - CC2650STK - TI Tool Folder.

10 sensors including support for light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature, and ambient temperature

 

 


SimpleLink - A Bluetooth LE sensor module from TI

published Dec 26, 2015 06:51   by admin ( last modified Dec 26, 2015 06:51 )

Seems to be a lot less hassle than soldering your own unit.

 

10 sensors including support for light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature, and ambient temperature


Read more: Link - SimpleLink™ Bluetooth Smart®/Multi-Standard SensorTag - CC2650STK - TI Tool Folder


Use "nopack" for local x2go connections on fast networks

published Dec 26, 2015 04:35   by admin ( last modified Dec 27, 2015 05:48 )

I tested a couple of different compression and encoding methods (as they call them) with an x2go client running on a Raspberry Pi B+ running Raspbian. It is connected via 100 Mbit/s Ethernet to an early Core2Duo server running Ubuntu 15.04 with 8 GB of RAM. The desktop is LXDE.

I entered this URL:

http://www.kjell.com/se/sok?query=rel%C3%A4

in the Firefox browser on the server machine. Firefox has funky scroll settings, and by fiddling with them (disabling smooth scrolling and hardware acceleration, among other things) one can get a much faster scroll under x2go. I did not do that fiddling before running the tests. However, I believe the data below still holds for the speed of rendering Firefox in an x2go-unfriendly configuration, and hence for the general rendering speed on the client side.

I scrolled to the bottom of that page and hit "top" on the keyboard to make the browser scroll to the top of the page. For some compression and encoding settings it could take 5 seconds. With nopack it took 2 seconds (roughly, timed with the ticks from my wristwatch).

So, nopack is best. But 2 seconds on a local network is still unacceptable while browsing. On my two-year-old Intel 4200U i5 laptop it is instantaneous.

With an insecure XDMCP connection between the same two computers (Pi, C2D), using Xephyr in Remmina on the client and lightdm on the server, the scroll takes 1 second. Using the native Firefox browser on the Raspberry Pi, it takes 1.5 seconds, so maybe the B+ just isn't cut out to be a computer you can browse smoothly on with a big screen, any way you slice it.

Update: You can configure the Raspberry Pi to use more of its memory for graphics. It is not clear to me yet if this will improve performance, but it will look better subjectively at least. A discussion on how to configure graphics memory is here:

Raspberry pi 2 1024M Increase Gpu Memory to 512 at least - Raspberry Pi Stack Exchange

Reference info here:

RPiconfig - eLinux.org
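
The setting itself lives in /boot/config.txt on the Pi; roughly like this (256 is just an example value, and a reboot is needed for it to take effect):

# /boot/config.txt
gpu_mem=256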

And as mentioned above, changing the scroll settings in Firefox helped a lot. In fact you get it down to close to instantaneous in x2go. Will try to update on the settings that helped with that.


How to use a Raspberry Pi B+ as an x2go client for remote desktopping to Ubuntu

published Dec 23, 2015 12:35   by admin ( last modified Dec 26, 2015 08:12 )
  1. Install Raspbian on the Raspberry Pi B+.
  2. sudo apt-get install x2goclient
  3. Make sure the server that you control accepts the crypto that the Raspbian version of x2goclient needs (a security downgrade); see: Kex error: Get x2goclient to work on Raspberry pi
  4. Install LXDE on the Ubuntu server to be controlled; most other desktop environments on Ubuntu blackscreen in x2goclient. See: Black screen: Get x2goclient to work with a Ubuntu x2go server — jorgenmodin.net
  5. Use the "nopack" setting (i.e. no compression) for local x2go connections on fast networks, and fiddle with the memory reserved for graphics.

Black screen: Get x2goclient to work with a Ubuntu x2go server

published Dec 22, 2015 03:45   by admin ( last modified Dec 25, 2015 08:03 )

I got a black screen when trying to use x2go between two Ubuntu 15.04 machines. However by installing the MATE desktop environment on the server I got it working. Had no luck with the Unity and Gnome options.

sudo apt-get install mate-desktop-environment

If you're on Raspbian (Debian for the Raspberry Pi), the version of x2goclient there does not support MATE. However, LXDE/Lubuntu seems to work fine in my tests as an alternative on the server side.

sudo apt-get install lubuntu-desktop

According to doc:de-compat [X2Go - everywhere@home] these desktop environments work:

  • Lxde (not Lxqt)
  • Xfce
  • Mate
  • Icewm
  • Openbox
  • Gnome <= 3.8 can work with workarounds; anything above is not recommended
  • KDE 3 and 4 are supposed to work, but not KDE 5.

Kex error: Get x2goclient to work on Raspberry pi

published Dec 22, 2015 02:25   by admin ( last modified Dec 22, 2015 02:25 )

The Raspbian version of X2GoClient is too old to connect to an X2GoServer for Ubuntu 15.04:

wiki:repositories:start [X2Go - everywhere@home] - Newer / more packages are not available currently. X2Go's upstream package archive does not include Raspbian packages.

You can get the Raspbian x2goclient to work again. You need to configure the ssh server (Ubuntu 15.04 in my case) where the x2goserver resides to accept deprecated (and hence somewhat insecure) key exchange algorithms. This line added to my sshd_config made it possible:

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1

Do note that it should all be on one line; in particular, "KexAlgorithms" must not end up on a line of its own, because then ssh logins get disabled wholesale, as I have learned the hard way.

And you will need to delete the server from the ~/.ssh/known_hosts file, or x2goclient will refuse to connect.
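
Instead of editing ~/.ssh/known_hosts by hand, you can let ssh-keygen remove the stale entry (the hostname below is a placeholder):

ssh-keygen -R your.server.hostname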


VNC, Remmina, X2Go: Remote desktop options for Linux

published Dec 22, 2015 02:10   by admin ( last modified Dec 22, 2015 03:47 )

I have used VNC for many, many years - but I am now experimenting with x2go to see if it can give better performance when the client is run from a Raspberry pi, and over remote connections.

VNC

There are many varieties of VNC clients and servers. TightVNC is in the Ubuntu repositories, as is Xvnc. TigerVNC should allow changing the resolution of the desktop on the fly.

Some advantages of VNC (points below taken almost verbatim from here):

  • Can be integrated into a hypervisor. It is integrated into KVM, Xen, VMware Workstation. This enables administering a VM before the OS is installed, and you can administer a VM with no network connection. You connect to the host's IP address, not the VM's IP address.
  • When properly setup with VirtualGL, TurboVNC can take advantage of a server's graphics card for 3D rendering.
  • VNC has support for 3D/compositing desktops such as GNOME3.

 

X2Go

X2Go seems to be an offshoot from Nomachine NX, which is a high-performance-to-bandwidth remote desktop solution used by large companies such as Ericsson.

I am currently testing X2Go, and actually sharing the screen from a VNC-started X session that has its own desktop on the server. The reason for this is that the server does not have a monitor connected to it and I already have a VNC server running there with its own desktop. There are other ways to do this on a headless server, mentioned here: Running a lightweight GUI on your vps via X2Go - Tutorials and Guides - vpsBoard. I have now tested a proper x2go connection to a Ubuntu server running the MATE desktop environment; it is much snappier.

start [X2Go - everywhere@home]

Advantages of X2Go (points below taken almost verbatim from the same place):

  •  Audio support
  • Folder Sharing
  • Printer sharing
  • High performance for 2D desktop usage
  • Works natively over SSH

Update: X2Go has this propensity to disappoint me, eventually. Last time I tried it, it segfaulted. That did not happen this time. This time the problem is instead that the Raspbian version of X2GoClient is too old to connect to the modern version of X2GoServer for Ubuntu 15.04:

wiki:repositories:start [X2Go - everywhere@home] - Newer / more packages are not available currently. X2Go's upstream package archive does not include Raspbian packages.

Still, it works from laptops, which is half the solution I wanted, at least. Will try to see if it is possible to somehow make Raspbian x2goclient play nice.

Update: You can get the Raspbian x2goclient to work again! You need to configure the ssh server (Ubuntu 15.04 in my case) where x2goserver resides to accept deprecated (and hence somewhat insecure) key exchange algorithms. This line added to my sshd_config made it possible:

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1

Do note that it should all be on one line; in particular, "KexAlgorithms" must not end up on a line of its own, because then ssh logins get disabled wholesale, as I have learned the hard way.

And you will need to delete the server from the ~/.ssh/known_hosts file, or x2goclient will refuse to connect.

Remmina

Untested by me so far. It seems to be able to speak many protocols and doesn't propose a new one. The web site seems not to have been updated in a year, but there is recent activity in the source code repository. It should be able to connect to both VNC and Nomachine NX, among other protocols.

FreeRDP/Remmina

"Remmina is a remote desktop client written in GTK+, aiming to be useful for system administrators and travellers, who need to work with lots of remote computers in front of either large monitors or tiny netbooks. Remmina supports multiple network protocols in an integrated and consistent user interface. Currently RDP, VNC, NX, XDMCP and SSH are supported."


How to open a WD My Cloud case & extract the HD

published Dec 21, 2015 03:25   by admin ( last modified Dec 21, 2015 03:24 )

The video linked below worked like a charm for me to open the case on a Western Digital My Cloud drive:

Read more: Link - How to open a WD My Cloud Case - YouTube

(Disclaimer: I take no responsibility for any damages by following these instructions)

I used more than one credit card so that there was always a card still inserted on the other side.

In order to extract the hard drive, you can tilt it carefully so that you can access the screws that go through two of the rubber bushings that suspend the hard drive in the case. On my drive the screw heads were of an unusual type, I think it is called "Torx", and luckily I had Torx bits that fit.

Then remove the screws that mount the circuit board to the hard drive; you should now be able to turn the hard drive out in the opposite direction of the circuit board. Use the Torx screwdriver to remove the bushings on the other side of the drive, and then you need a hex screwdriver to remove the metal thingamajig on that side. You should now be left with a bare hard drive, suitable e.g. for mounting in another case, or to be replaced with another drive.

Mine is now humming along in a USB 3.0 case, where the USB port is in slave mode, as opposed to the not-so-useful master mode on the WD My Cloud.


Pronouncing numbers with powers of ten in them in English

published Dec 21, 2015 02:12   by admin ( last modified Dec 21, 2015 02:12 )

As you probably all know, in Swedish 4*10⁵ is pronounced "Fyra gånger tio upphöjt till fem" ("four times ten raised to the power of five") :) .

But what is it called in English? "Four times ten raised to the power of five" is kind of long. Apparently you can say it like this:

"Four by ten to the five"

 

Normally I'd say five by ten to the five and two by ten to the eight.


Read more: Link - How to pronounce 5x10^5, e.g. | WordReference Forums


Forking and joining a javascript promise chain

published Dec 17, 2015 09:30   by admin ( last modified Dec 17, 2015 10:52 )

So, I have this promise chain in Bluebird that works like a pipeline: the data comes in at one end, gets processed, and pops out at the other end. Although you actually never leave a promise chain once you're in it, so it doesn't actually pop out; instead, inside the last step a function call is made to a function outside of the chain.

Now, new directions are given to the developer: besides the data which is the input of the promise chain, the chain needs to split up a bit temporarily and join again. During the split-up, the parallel pipelines need to process slightly different data from each other, as defined in an array. Well, how do you do that?

It's not only a question of doing work in parallel, since each line now has one parameter that is different from the other lines. Here is one way that seems to work: you inject the array into the this object with bind, and then have one step that takes that array and makes a new array with each array value combined with all the input data.

var P = require("bluebird")

var colors =['foo', 'bar'] // the different stuff

// "0" below stands for some kind of input data
P.resolve(0).bind({colors:colors}).then(forkIt).map(remainingStuff).then(loggIt).done()

function loggIt(stuff){
    console.log("logging")
    console.log(stuff)

    }

function forkIt(stuff){
    var ret = []
    for (color of this.colors){
        ret.push({'color':color, value:stuff})
    }
    return ret
    }


function remainingStuff(stuff){
    return P.resolve(stuff).then(baz)
    }

function baz(bletch) {
     return bletch
    }

It would be even nicer if you could then use bind in each line to give this different contents per parallel line. But in my initial tests, this is global to all the lines, so that does not work. Which can be seen if you add an additional step called bat:

function remainingStuff(stuff) {
    return P.resolve(stuff).then(baz).then(bat)
}

function baz(bletch) {
    // "this" is shared between the parallel lines here, which is the problem
    this.color = bletch.color
    return bletch
}

function bat(bletch) {
    console.log(this.color + bletch.color)
    return bletch
}

 

This will make the code print, among other things:

barfoo
barbar

Instead of the desired:

foofoo
barbar
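
One thing that might work around this, offered only as a sketch: Bluebird's bind can also be applied to each inner chain, giving every parallel line its own context object instead of the implicitly shared one:

function remainingStuff(stuff) {
    // each parallel line gets its own "this"
    return P.resolve(stuff).bind({}).then(baz).then(bat)
}

With a fresh object bound per line, baz would store the color on that line's own context, and bat should then print foofoo and barbar.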

Countersigning needed for digital signatures to go big

published Dec 11, 2015 05:55   by admin ( last modified Dec 12, 2015 01:37 )

In fact, you should have a pre-defined list of public keys that need to be used to countersign your signature for your signature to be valid

A big problem with digital signatures (sometimes called electronic signatures) is that somebody can steal the signature making device and sign away on stuff, pretending to be you.

One way of mitigating that is to have your signature be valid only if other people countersign it. For any contract that isn't just a small purchase, it is worth the extra effort to call up some people who trust you and ask them to countersign, or to meet up in person and do it. See: Wikipedia - Countersign (legal).

In fact, you should have a pre-defined list of public keys that need to be used to countersign your signature for your signature to be valid. At least m of n of them must be used, and if one of them gets stolen, then as long as the original owner still has access to it, there is enough redundancy that it does not matter as much (but that key should of course be marked as stolen, and there should be a threshold for how many such keys can be used).
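
As a rough sketch (not a vetted protocol), checking an m-of-n rule in Node.js could look something like this; the data layout and names are invented for illustration:

var crypto = require('crypto')

// trustedPublicKeys: the pre-defined list of public keys (PEM strings)
// countersignatures: one signature Buffer (or null) per key, in the same
//                    order, each made over the original signature
function enoughCountersignatures(originalSignature, countersignatures, trustedPublicKeys, m) {
    var valid = 0
    trustedPublicKeys.forEach(function (publicKeyPem, i) {
        var sig = countersignatures[i]
        if (!sig) return // this key holder did not countersign
        var verifier = crypto.createVerify('RSA-SHA256')
        verifier.update(originalSignature)
        if (verifier.verify(publicKeyPem, sig)) {
            valid += 1
        }
    })
    return valid >= m
}

A real scheme would also need the stolen-key bookkeeping mentioned above.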


Web server proxies, as work queues with apps pulling requests

published Dec 03, 2015 03:55   by admin ( last modified Dec 12, 2015 12:06 )

One thing I have thought about for a while is to reverse how load balancers work. Today, to my understanding, they receive web requests and then distribute them across an array of application servers. The proxy can measure the response time and thereby decide which server to send the next request to, or it can use some kind of algorithm, such as round-robin.

But the component that knows best when it is ready to process a new request is in a way the app server itself. So why not let the proxy maintain a queue of incoming requests and then have the app servers pull requests from the queue?

Such an architecture leads to another interesting feature: app servers can be created and simply start pulling work from the proxy, without the proxy needing to be reconfigured to accommodate the new instance of an app server. If using SSL certificates, adding app servers could just be a question of firing up new instances and having them poll the queue, blocking until they get a job from the queue.

Furthermore the proxy can, if an app server is slow to process a job, serve out an intermediate response, that tells the client to continue polling (e.g. through JSON-RPC) until the result comes through. In this way you can work around fronting server timeouts.

So if a request becomes a "job", caching can be memoizing a job, and the proxy can decide whether to serve a cached version or wait for a job to finish. If a certain job with a memoized signature is slow to respond, its caching can be increased. If an app server serves jobs much slower than other ones, it can be put out of commission.

However an app server may benefit from serving several requests concurrently due to I/O waiting and in that case the setup gets a bit more complicated.
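
To make the idea concrete, here is a toy sketch in Node.js of the pull model (port, path and job format are invented for illustration, and it leaves out holding the client connection open for the result):

var http = require('http')

var jobs = []     // queued client requests waiting for an app server
var waiting = []  // app server responses currently blocked on /next-job

http.createServer(function (req, res) {
    if (req.url === '/next-job') {
        // an app server pulls work; long-poll until a job exists
        var job = jobs.shift()
        if (job) {
            res.end(JSON.stringify(job))
        } else {
            waiting.push(res)
        }
    } else {
        // a client request becomes a job on the queue
        var newJob = {id: Date.now(), url: req.url}
        var worker = waiting.shift()
        if (worker) {
            worker.end(JSON.stringify(newJob))
        } else {
            jobs.push(newJob)
        }
        res.end(JSON.stringify({status: 'queued', job: newJob.id}))
    }
}).listen(8080)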

Update: I've got a reply on Reddit pointing out some problems with the above design.


Running node.js applications under Java - initial check on what's available

published Dec 03, 2015 03:35   by admin ( last modified Dec 25, 2015 09:19 )

Apparently the new version of the JavaScript implementation on the JVM - Nashorn - is quite fast. It would be interesting if one could move Node.js apps from Node onto the JVM. Here are some examples purported to work under the JVM:

https://www.dropbox.com/s/xdna3pvgf5rjqu9/nashorn-demos.zip?dl=0

The example using express.js seems to use avatar.js, which is not 100% compatible with node.js according to their own documentation.

Still, being able to integrate Node.js apps into the JVM is interesting. Even more interesting might be if it were possible to leverage the multi-threading of the JVM through e.g. some language extensions to JavaScript on Nashorn.

Frequently asked questions about Nashorn and Node.js | Niko Köbler – Software-Architect, Developer & Trainer
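
Plain JVM interop already works in Nashorn scripts without any Node compatibility layer; a tiny sketch that can be run with the jjs tool that ships with Java 8:

// run with: jjs threads.js
var Thread = Java.type('java.lang.Thread')

var t = new Thread(function () {
    print('hello from ' + Thread.currentThread().getName())
})
t.start()
t.join()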

Update: It seems like the Avatar.js project is now defunct. A blog post from 2015-02-12:

 

We'd like to thank those who have provided us feedback throughout the life of Project Avatar. It has been very much appreciated and helped us more than you know.
 
Avatar and Avatar.js project pages, along with the code and related binaries, will remain available for folks to learn from and leverage.

Gitwatch

published Nov 27, 2015 11:51   by admin ( last modified Nov 27, 2015 11:51 )

A bash script to watch a file or folder and commit changes to a git repo

 

A way to watch a file and back it up to git on every save.
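
The rough idea, as a sketch (this is not the actual gitwatch script, which is more elaborate; it assumes inotify-tools is installed):

#!/bin/bash
# watch one file and commit it on every save
FILE="$1"
while inotifywait -q -e close_write "$FILE"; do
    git add "$FILE"
    git commit -m "Auto-commit: $FILE changed"
done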


How to override the reported hostname in Ganglia

published Nov 23, 2015 01:52   by admin ( last modified Nov 23, 2015 01:52 )

In the globals section of gmond.conf, put:

override_hostname = your.preferred.hostname
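
In context, something like this (path and exact quoting per your gmond.conf; the other settings in the block stay as they are):

# /etc/ganglia/gmond.conf
globals {
  override_hostname = "your.preferred.hostname"
}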


yEd - a very nice charting tool

published Nov 19, 2015 07:30   by admin ( last modified Dec 03, 2015 02:49 )

Written in Java and hence might run on many operating systems. Really nicely done. You get good results quickly.

Example (I suck at graphics though):


Keep your tmux sessions although client has upgraded

published Nov 19, 2015 04:59   by admin ( last modified Nov 19, 2015 04:59 )

I started an upgrade of a machine and decided to do it over tmux, so that losing the SSH connection would be no biggie. However, when I came back and wanted to reattach to the tmux session, I couldn't; the tmux client had been upgraded so it wouldn't connect to the old sessions. You can then do like this:

pgrep tmux
(prints the <process id> of the still-running tmux server)
/proc/<process id>/exe attach



 

Pretty awesome hack, if you need your tmux working and don't want to lose all your sessions:


Read more: Link - tmux - protocol version mismatch (client 8, server 6) when trying to upgrade - Unix & Linux Stack Exchange


"inner-dest plugin username not found": You need to use 3.6+ of syslog-ng

published Nov 16, 2015 01:31   by admin ( last modified Nov 16, 2015 01:31 )

If you get the message:

Error parsing afmongodb, inner-dest plugin username not found

in syslog-ng when trying to connect to mongodb, it is likely that you are running a version of syslog-ng that has support for mongodb, but not for mongodb authentication. An example of such a version is 3.5.3-1, which as of this writing is the one in the standard repositories for Ubuntu 14.04 LTS. Versions 3.6 and later are supposed to have support for mongo authentication.

Revision: 3.5.3-1 [@9695e81] (Ubuntu/14.04)
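
For reference, a destination using authentication might look roughly like this on 3.6+; the option names below are from my reading of the documentation and should be checked against your version:

destination d_mongodb {
    mongodb(
        database("syslog")
        collection("messages")
        username("syslogng")
        password("secret")
    );
};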