Archives 2005 - 2019

SimpleLink - A Bluetooth LE sensor module from TI

published Dec 26, 2015 06:51   by admin ( last modified Dec 26, 2015 06:51 )

Seems to be a lot less hassle than soldering your own unit.

 

10 sensors including support for light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature, and ambient temperature


Read more: Link - SimpleLink™ Bluetooth Smart®/Multi-Standard SensorTag - CC2650STK - TI Tool Folder


Use "nopack" for local x2go connections on fast networks

published Dec 26, 2015 04:35   by admin ( last modified Dec 27, 2015 05:48 )

I tested a couple of different compression and encoding methods (as they call them) with an x2go client running on a Raspberry Pi B+ running Raspbian. It is connected via 100 Mbit/s Ethernet to an early Core 2 Duo server running Ubuntu 15.04 with 8 GB of RAM. The desktop is LXDE.

I entered this URL in the Firefox browser on the server machine:

http://www.kjell.com/se/sok?query=rel%C3%A4

Firefox has funky scroll settings, and by fiddling with them (disabling smooth scrolling and hardware acceleration, among other things) one can get a much faster scroll under x2go. I did not do that fiddling before running the tests. However, I believe the data below still holds for the speed of rendering Firefox in an x2go-unfriendly configuration, and hence for the general rendering speed on the client side.

I scrolled to the bottom of that page and hit "top" on the keyboard to make the browser scroll to the top of the page. For some compression and encoding settings it could take 5 seconds. With nopack it took 2 seconds (roughly, timed with the ticks from my wristwatch).

So nopack is best. But 2 seconds on a local network is still unacceptable while browsing. On my two-year-old Intel i5-4200U laptop it is instantaneous.

With an insecure XDMCP connection between the same two computers (Pi, C2D) using Xephyr in Remmina on the client and lightdm on the server, the scroll takes 1 second. Using the native Firefox browser on the Raspberry Pi, it takes 1.5 seconds, so maybe the B+ just isn't cut out to be a computer you can browse smoothly on with a big screen, any way you slice it.

Update: You can configure the Raspberry Pi to use more of its memory for graphics. It is not clear to me yet if this will improve performance, but it will look better subjectively at least. A discussion on how to configure graphics memory is here:

Raspberry pi 2 1024M Increase Gpu Memory to 512 at least - Raspberry Pi Stack Exchange

Reference info here:

RPiconfig - eLinux.org
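For reference, the memory split is set with the gpu_mem parameter in /boot/config.txt on the Pi. A minimal sketch; the value below is just a plausible example, not a tested recommendation:

# In /boot/config.txt; takes effect after a reboot
gpu_mem=256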

And as mentioned above, changing the scroll settings in Firefox helped a lot. In fact it gets down to close to instantaneous in x2go. I will try to post an update on the settings that helped with that.
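For what it's worth, the standard about:config switches for the two tweaks mentioned are the following; exactly which prefs I flipped is not recorded here, so treat these as an educated guess:

general.smoothScroll = false (disables smooth scrolling)
layers.acceleration.disabled = true (disables hardware acceleration)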


How to use a Raspberry Pi B+ as an x2go client for remote desktopping to Ubuntu

published Dec 23, 2015 12:35   by admin ( last modified Dec 26, 2015 08:12 )
  1. Install Raspbian on the Raspberry Pi B+.
  2. sudo apt-get install x2goclient
  3. Make sure the server that you control accepts the crypto that the Raspbian version of x2goclient needs (a security downgrade): Kex error: Get x2goclient to work on Raspberry pi
  4. Install LXDE on the Ubuntu server to be controlled; most other desktop environments on Ubuntu black-screen in x2goclient (see the condensed commands below). Black screen: Get x2goclient to work with a Ubuntu x2go server — jorgenmodin.net
  5. Use the "nopack" setting (i.e. no compression) for local x2go connections on fast networks, and fiddle with the memory reserved for graphics.
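Condensed into commands, steps 2 and 4 might look like this (package names are from the X2Go docs; this assumes x2goserver is available in your configured repositories, otherwise add X2Go's repository first):

# On the Raspberry Pi (client):
sudo apt-get install x2goclient

# On the Ubuntu server:
sudo apt-get install x2goserver x2goserver-xsession lubuntu-desktop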

Black screen: Get x2goclient to work with a Ubuntu x2go server

published Dec 22, 2015 03:45   by admin ( last modified Dec 25, 2015 08:03 )

I got a black screen when trying to use x2go between two Ubuntu 15.04 machines. However, by installing the MATE desktop environment on the server I got it working. I had no luck with the Unity and Gnome options.

sudo apt-get install mate-desktop-environment

If you're on Raspbian (Debian for the Raspberry Pi), the version of x2goclient there does not support MATE. However, LXDE/Lubuntu seemed to work fine in my tests as an alternative on the server side.

sudo apt-get install lubuntu-desktop

According to doc:de-compat [X2Go - everywhere@home] these desktop environments work:

  • LXDE (not LXQt)
  • Xfce
  • MATE
  • IceWM
  • Openbox
  • GNOME <= 3.8 can work with workarounds; anything above is not recommended
  • KDE 3 and 4 are supposed to work, but not KDE 5

Kex error: Get x2goclient to work on Raspberry pi

published Dec 22, 2015 02:25   by admin ( last modified Dec 22, 2015 02:25 )

The Raspbian version of X2GoClient is too old to connect to an X2GoServer for Ubuntu 15.04:

wiki:repositories:start [X2Go - everywhere@home] - Newer / more packages are not available currently. X2Go's upstream package archive does not include Raspbian packages.

You can get the Raspbian x2goclient to work again. You need to configure the ssh server (Ubuntu 15.04 in my case) where the x2goserver resides to accept deprecated (and hence somewhat insecure) key exchange algorithms. This line added to my sshd_config made it possible:

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1

Do note that it should all be on one line; in particular, "KexAlgorithms" should not end up on a line of its own, because then ssh logins get disabled wholesale, as I have learned the hard way.

And you will need to delete the server from the ~/.ssh/known_hosts file, or x2goclient will refuse to connect.
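Two standard OpenSSH commands help here (the host name below is a placeholder):

# List the key exchange algorithms the (old) client supports:
ssh -Q kex

# Remove the stale server entry instead of editing known_hosts by hand:
ssh-keygen -R your.server.example.com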


VNC, Remmina, X2Go: Remote desktop options for Linux

published Dec 22, 2015 02:10   by admin ( last modified Dec 22, 2015 03:47 )

I have used VNC for many, many years, but I am now experimenting with x2go to see if it can give better performance when the client is run from a Raspberry Pi, and over remote connections.

VNC

There are many varieties of VNC clients and servers. TightVNC is in the Ubuntu repositories, as is Xvnc. TigerVNC should allow changing the resolution of the desktop on the fly.

Some advantages of VNC (points below taken almost verbatim from here):

  • Can be integrated into a hypervisor. It is integrated into KVM, Xen, VMware Workstation. This enables administering a VM before the OS is installed, and you can administer a VM with no network connection. You connect to the host's IP address, not the VM's IP address.
  • When properly set up with VirtualGL, TurboVNC can take advantage of a server's graphics card for 3D rendering.
  • VNC has support for 3D/compositing desktops such as GNOME3.

 

X2Go

X2Go seems to be an offshoot from Nomachine NX, which is a high-performance-to-bandwidth remote desktop solution used by large companies such as Ericsson.

I am currently testing X2Go, and actually sharing the screen from a VNC-started X session that has its own desktop on the server. The reason for this is that the server does not have a monitor connected to it and I already have a VNC server running there with its own desktop. There are other ways to do this on a headless server, mentioned here: Running a lightweight GUI on your vps via X2Go - Tutorials and Guides - vpsBoard. I have now tested a proper x2go connection to an Ubuntu server running the MATE desktop environment, which is much snappier.

start [X2Go - everywhere@home]

Advantages of X2Go (points below taken almost verbatim from the same place):

  •  Audio support
  • Folder Sharing
  • Printer sharing
  • High performance for 2D desktop usage
  • Works natively over SSH

Update: X2Go has this propensity to disappoint me, eventually. Last time I tried it, it segfaulted. That did not happen this time. Right now the problem is instead that the Raspbian version of X2GoClient is too old to connect to the modern version of X2GoServer for Ubuntu 15.04:

wiki:repositories:start [X2Go - everywhere@home] - Newer / more packages are not available currently. X2Go's upstream package archive does not include Raspbian packages.

Still, it works from laptops, which is half the solution I wanted, at least. Will try to see if it is possible to somehow make Raspbian x2goclient play nice.

Update: You can get the Raspbian x2goclient to work again! You need to configure the ssh server (Ubuntu 15.04 in my case) where the x2goserver resides to accept deprecated (and hence somewhat insecure) key exchange algorithms. This line added to my sshd_config made it possible:

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1

Do note that it should all be on one line; in particular, "KexAlgorithms" should not end up on a line of its own, because then ssh logins get disabled wholesale, as I have learned the hard way.

And you will need to delete the server from the ~/.ssh/known_hosts file, or x2goclient will refuse to connect.

Remmina

Untested by me so far. It seems to be able to speak many protocols and doesn't propose a new one. The web site seems not to have been updated in a year, but there is recent activity in the source code repository. It should be able to connect to both VNC and Nomachine NX, among other protocols.

FreeRDP/Remmina

"Remmina is a remote desktop client written in GTK+, aiming to be useful for system administrators and travellers, who need to work with lots of remote computers in front of either large monitors or tiny netbooks. Remmina supports multiple network protocols in an integrated and consistent user interface. Currently RDP, VNC, NX, XDMCP and SSH are supported."


How to open a WD My Cloud case & extract the HD

published Dec 21, 2015 03:25   by admin ( last modified Dec 21, 2015 03:24 )

The video linked below worked like a charm for me to open the case on a Western Digital My Cloud drive:

Read more: Link - How to open a WD My Cloud Case - YouTube

(Disclaimer: I take no responsibility for any damages by following these instructions)

I used more than one credit card so that there was always a card still inserted on the other side.

In order to extract the hard drive, you can tilt it carefully so that you can access the screws that go through two of the rubber bushings that suspend the hard drive in the case. On my drive the screw heads were of an unusual standard, I believe it is called "Torx", and luckily I had Torx bits that fit.

Then remove the screws that mount the circuit board to the hard drive; you should now be able to tilt the hard drive out, away from the circuit board. Use the Torx screwdriver to remove the bushings on the other side of the drive, and then use a hex screwdriver to remove the metal thingamajig on that side. You should now be left with a bare hard drive, e.g. suitable for mounting in another case, or for being replaced with another drive.

Mine is now humming along in a USB 3.0 case, where the USB port is in slave mode, as opposed to the not-so-useful master mode on the WD My Cloud.


Pronouncing numbers with powers of ten in them in English

published Dec 21, 2015 02:12   by admin ( last modified Dec 21, 2015 02:12 )

As you probably all know, in Swedish 4*10⁵ is pronounced "Fyra gånger tio upphöjt till fem" ("four times ten raised to the power of five") :) .

But what is it called in English? "Four times ten raised to the power of five" is kind of long. Apparently you can say it like this:

"Four by ten to the five"

 

Normally I'd say five by ten to the five and two by ten to the eight.


Read more: Link - How to pronounce 5x10^5, e.g. | WordReference Forums


Forking and joining a javascript promise chain

published Dec 17, 2015 09:30   by admin ( last modified Dec 17, 2015 10:52 )

So, I have this promise chain in Bluebird that works like a pipeline: data comes in at one end, gets processed, and pops out at the other end. Although you never actually leave a promise chain once you're in it, so it doesn't literally pop out; rather, inside the last step a call is made to a function outside of the chain.

Now, new directions are given to the developer: besides the data that is the input of the promise chain, the chain needs to temporarily split up and then join again. During the split-up, the parallel pipelines need to process slightly different data from each other, as defined in an array. Well, how do you do that?

It's not just a question of doing work in parallel, since each line now has one parameter that differs from the other lines. Here is one way that seems to work: you inject the array into the this object with bind, and then have one step that takes that array and makes a new array with each array value combined with the input data.

var P = require("bluebird")

var colors = ['foo', 'bar'] // the different stuff, one entry per parallel line

// "0" below stands for some kind of input data
P.resolve(0).bind({colors: colors}).then(forkIt).map(remainingStuff).then(loggIt).done()

function loggIt(stuff) {
    console.log("logging")
    console.log(stuff)
}

// Combine each color with the full input data, producing one work item
// per parallel line. Bluebird's .map() then runs remainingStuff on each.
function forkIt(stuff) {
    var ret = []
    for (var color of this.colors) {
        ret.push({color: color, value: stuff})
    }
    return ret
}

function remainingStuff(stuff) {
    return P.resolve(stuff).then(baz)
}

function baz(bletch) {
    return bletch
}

It would be even nicer if you could then use bind in each line to give this different contents per parallel line. But in my initial tests, this is global to all the lines, so that does not work. This can be seen if you add an additional step called bat:

function remainingStuff(stuff) {
    return P.resolve(stuff).then(baz).then(bat)
}

// Note: all the parallel lines share the same bound this object here
function baz(bletch) {
    this.color = bletch.color
    return bletch
}

function bat(bletch) {
    console.log(this.color + bletch.color)
    return bletch
}

 

This will make the code print, among other things:

barfoo
barbar

Instead of the desired:

foofoo
barbar
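A possible fix, based on Bluebird's documented .bind() semantics but not verified by me, so treat it as a sketch: give the inner chain in remainingStuff its own bound object, since .bind() only affects the chain it is called on:

function remainingStuff(stuff) {
    // Each parallel line gets a fresh bound object of its own,
    // so baz and bat no longer share this with the other lines
    return P.resolve(stuff).bind({}).then(baz).then(bat)
}

With that change the output should be the desired foofoo and barbar.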

Countersigning needed for digital signatures to go big

published Dec 11, 2015 05:55   by admin ( last modified Dec 12, 2015 01:37 )

In fact, you should have a pre-defined list of public keys that must be used to countersign your signature for your signature to be valid

A big problem with digital signatures (sometimes called electronic signatures) is that somebody can steal the signature-making device and sign away on stuff, pretending to be you.

One way of mitigating that is to have your signature be valid only if other people countersign it. For any contract that isn't just a small purchase, it is worth the extra effort to call up some people who trust you and ask them to countersign, or to meet up in person and do that. Wikipedia - Countersign (legal).

In fact, you should have a pre-defined list of public keys that must be used to countersign your signature for your signature to be valid. At least m of n of them must be used; if one of them gets stolen, then as long as the original owner still has access to it, there is enough redundancy that it does not matter as much (but that key should of course be marked as stolen, and there should be a threshold for how many such keys may be used).
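A minimal sketch of the m-of-n check in Node.js. The key list, the RSA-SHA256 algorithm and the pairing of keys to countersignatures are all assumptions for illustration:

var crypto = require('crypto')

// trusted: the pre-defined list, [{name: ..., publicKeyPem: ...}, ...]
// countersigs: gathered countersignatures, [{name: ..., signature: Buffer}, ...]
// signature: the original signature (a Buffer) that was countersigned
function enoughCountersignatures(signature, countersigs, trusted, m) {
    var valid = 0
    trusted.forEach(function (keyEntry) {
        countersigs.forEach(function (cs) {
            if (cs.name !== keyEntry.name) return
            var verifier = crypto.createVerify('RSA-SHA256')
            verifier.update(signature)
            if (verifier.verify(keyEntry.publicKeyPem, cs.signature)) {
                valid += 1
            }
        })
    })
    // the signature only counts as valid past the threshold
    return valid >= m
}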


Web server proxies, as work queues with apps pulling requests

published Dec 03, 2015 03:55   by admin ( last modified Dec 12, 2015 12:06 )

One thing I have thought about for a while is to reverse how load balancers work. Today, to my understanding, they receive web requests and then distribute them across an array of application servers. The proxy can measure the response time and thus decide which server to send the next request to, or it can use some kind of algorithm, such as round-robin.

But the component that knows best when it is ready to process a new request is in a way the app server itself. So why not let the proxy maintain a queue of incoming requests and then have the app servers pull requests from the queue?

Such an architecture leads to another interesting feature: app servers can be created and simply start pulling work from the proxy, without the proxy needing to be reconfigured to accommodate the new instance. If using SSL certificates, adding app servers could just be a question of firing up new instances and having them poll the queue, blocking until they get a job.

Furthermore, if an app server is slow to process a job, the proxy can serve an intermediate response that tells the client to continue polling (e.g. through JSON-RPC) until the result comes through. In this way you can work around fronting-server timeouts.

So if a request becomes a "job", caching amounts to memoizing a job, and the proxy can decide whether to serve a cached version or wait for a job to finish. If a certain job with a memoized signature is slow to respond, its caching can be increased. If an app server serves jobs much more slowly than the others, it can be taken out of commission.

However, an app server may benefit from serving several requests concurrently due to I/O waits, in which case the setup gets a bit more complicated.
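A minimal sketch of the pulling app server, in Node.js. The /next-job and /result endpoints, the host name and the job format are all made up for illustration; no such proxy exists as far as I know:

var http = require('http')

// Long-poll the proxy for the next job, handle it, post the result
// back, and only then ask for another job.
function pollOnce() {
    http.get({host: 'proxy.example.com', path: '/next-job'}, function (res) {
        var body = ''
        res.on('data', function (chunk) { body += chunk })
        res.on('end', function () {
            var job = JSON.parse(body) // e.g. {id: 42, request: {...}}
            var result = handle(job)
            var post = http.request({
                host: 'proxy.example.com',
                path: '/result/' + job.id,
                method: 'POST',
                headers: {'Content-Type': 'application/json'}
            }, pollOnce)
            post.end(JSON.stringify(result))
        })
    })
}

function handle(job) {
    return {status: 200, body: 'Hello from a pulled job'}
}

pollOnce()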

Update: I got a reply on Reddit pointing out some problems with the above design.


Running node.js applications under Java - initial check on what's available

published Dec 03, 2015 03:35   by admin ( last modified Dec 25, 2015 09:19 )

Apparently the new version of the JavaScript implementation on the JVM, Nashorn, is quite fast. It would be interesting if one could move Node.js apps from Node onto the JVM. Here are some examples that purportedly work under the JVM:

https://www.dropbox.com/s/xdna3pvgf5rjqu9/nashorn-demos.zip?dl=0

The example using express.js seems to use avatar.js, which is not 100% compatible with node.js according to their own documentation.

Still, being able to integrate Node.js apps into the JVM is interesting. Even more interesting might be if it were possible to leverage the multithreading of the JVM through e.g. some language extensions to JavaScript on Nashorn.

Frequently asked questions about Nashorn and Node.js | Niko Köbler – Software-Architect, Developer & Trainer

Update: It seems like the Avatar.js project is now defunct. A blog post from 2015-02-12:

 

We'd like to thank those who have provided us feedback throughout the life of Project Avatar. It has been very much appreciated and helped us more than you know.
 
Avatar and Avatar.js project pages, along with the code and related binaries, will remain available for folks to learn from and leverage.

Gitwatch

published Nov 27, 2015 11:51   by admin ( last modified Nov 27, 2015 11:51 )

A bash script to watch a file or folder and commit changes to a git repo

 

A way to watch a file and back it up to git on every save.


How to override the reported hostname in Ganglia

published Nov 23, 2015 01:52   by admin ( last modified Nov 23, 2015 01:52 )

In the globals section of gmond.conf, put:

override_hostname = your.preferred.hostname
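In context it looks like this; a minimal sketch, keeping whatever other settings your gmond.conf already has:

globals {
  # ...existing settings...
  override_hostname = your.preferred.hostname
}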


yEd - a very nice charting tool

published Nov 19, 2015 07:30   by admin ( last modified Dec 03, 2015 02:49 )

Written in Java and hence might run on many operating systems. Really nicely done. You get good results quickly.

Example (I suck at graphics though): [example chart image]


Keep your tmux sessions although client has upgraded

published Nov 19, 2015 04:59   by admin ( last modified Nov 19, 2015 04:59 )

I started an upgrade of a machine and decided to do it over tmux, so that losing the SSH connection would be no biggie. However, when I came back and wanted to reattach to the tmux session, I couldn't; the client had been upgraded, so it would not connect to the old sessions. Then you can do like this:

pgrep tmux
# prints the process id of the still-running (old) tmux server, e.g. 1234
/proc/<process id>/exe attach
# i.e. run the old tmux binary via /proc, e.g. /proc/1234/exe attach



 

Pretty awesome hack if you need your tmux working and don't want to lose all your sessions:


Read more: Link - tmux - protocol version mismatch (client 8, server 6) when trying to upgrade - Unix & Linux Stack Exchange


"inner-dest plugin username not found": You need to use 3.6+ of syslog-ng

published Nov 16, 2015 01:31   by admin ( last modified Nov 16, 2015 01:31 )

If you get the message:

Error parsing afmongodb, inner-dest plugin username not found

in syslog-ng when trying to connect to MongoDB, it is likely that you are running a version of syslog-ng that has support for MongoDB, but not for MongoDB authentication. An example of such a version is 3.5.3-1, which as of this writing is the one in the standard repositories for Ubuntu 14.04 LTS. Versions 3.6+ are supposed to support MongoDB authentication.
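You can check which version you are running; the Revision line quoted below comes from this output:

syslog-ng --version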

Revision: 3.5.3-1 [@9695e81] (Ubuntu/14.04)

Open source logging, analysis and monitoring tools

published Nov 14, 2015 12:20   by admin ( last modified Nov 14, 2015 12:19 )

(Logos: Kibana, Flume, syslog-ng, rsyslog, Munin, RRDtool, MongoDB, Graphite, PostgreSQL, Elasticsearch, InfluxDB, Riak, Riemann, Ganglia, PCP, Fluentd, collectd, StatsD, Logstash)

An attempt to structure what open source logging and monitoring tools are available. I've just started checking out this area.

This article will first put structure to logging and monitoring needs and then list what is available, with short descriptions, categorized.

The use case for my analysis is an outfit of a dozen or so publicly reachable machines, with in-house custom-built services reachable over HTTP as REST, JSON-RPC and web pages. Supporting these services are database servers holding hundreds of gigabytes of data, and a couple of other servers specific to the business.

A high-level overview of the field may look like this:

Logs and metrics -> Aggregation -> Monitoring -> Notification

Logs and metrics -> Aggregation -> Storage ->  Log analysis

Availability, sanity and fixing

So, why should you monitor servers and log data from them? It could be divided into ensuring the availability of your systems, the sanity of your systems and fixing the systems:

Availability (Monitoring)

Are the servers on-line and the components working? How would you know? You could have:

  • Alarms sent when the monitoring system detects services not working at all or other critical conditions
  • As an aside, you could also consider a bit of "monitor-less monitoring": let the customers do the monitoring, and give them a way to quickly indicate that something isn't running smoothly. For example a form that submits the problem with an automatic indication of which machine/service the message comes from, or just a text indicating where to file a ticket.
  • There is probably a minimum good set of monitor info you want from the system in general: CPU, memory, disk space, open file descriptors.
  • There should be a place where you can see graphs of the last seven days of monitoring output.
  • Monitoring of application-level services, such as those running under a process manager such as pm2 or supervisord. At a minimum, memory consumption and status per process.

Articles

Sanity (Monitoring)

Even if a system is available and responding to the customer's actions, it may not be accurate.

  • No instrumentation is needed for this on the servers; simply monitor the services from another machine by making HTTP requests. Check response time and accuracy of results. This will also catch network connectivity issues. It is similar to end-to-end tests, integration tests and regression tests, but on live data.

Fixing (Logging)

  • Why did the problem come about? - Traceback and error logging, comparing logs from different subsystems. There ought to be ready-made instrumentation for services used on the machine: PostgreSQL, MongoDB, Nginx and such. It is important to make sure the systems log enough info, especially your own software. If the space requirements get big, be aggressive with log rotation. There are a number of standardized log formats:

Standardized log records

There are a couple of standards with regard to the format of log records. I believe RFC 5424 is more modern than RFC 3164, and GELF is becoming a bit of a de facto standard in newer systems (Graylog, log4j), with log data encoded in JSON.
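For illustration, a record in RFC 5424 format and one in GELF might look like this (the contents are made up):

<34>1 2015-11-14T12:20:50.003Z www1.example.com nginx 1234 - - GET /index.html 200

{"version": "1.1", "host": "www1.example.com", "short_message": "GET /index.html 200", "timestamp": 1447503650.003, "level": 6}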

Logging

Logging should answer the questions, in increasing order of ambition:

  • When and for how long? - When did the problem occur and how long did it persist?
  • How? - How did the problem manifest itself, e.g. out of memory or out of file descriptors
  • Why? - Why did the problem come about?

Data interfaces/aggregators

Ok, going back to the diagrams:

Logs and metrics -> Aggregation -> Monitoring -> Notification

Logs and metrics -> Aggregation -> Storage ->  Log analysis

Firstly, data needs to be made available for aggregation. In some cases this is about making accessible log messages that are already produced. In other cases it means introducing new data-collecting services (metrics).

Writing logs to a log file that nobody knows about does not count as making data available. However, writing to a well-known logging service does. A process that finds log files and then reads from them also makes data available.

Data interfaces/aggregators software

Articles

Analyzers and visualizers - monitoring

After you have the data, you may want to monitor and react to events and unusual circumstances. A monitoring tool can react when thresholds are reached, and can often calculate and compare values, also over some (limited) time span. There are also possibilities for visualization.

Analyzers and visualizers - logging

There are basically two kinds of analysis: one is of time-series data, where graphs are of help; the other is of events such as errors, which is more textual data. A time-series tool typically does two things:

  1. Store numeric time-series data
  2. Render graphs of this data on demand

Articles

Protocol brokers

These translate one protocol into another, or aggregate (which makes them a bit like a category further up). The contents of this category are just a selection of some that I found, mostly for inspiration if/when I need to fit together pieces that may need some programming/adapting.

Storage back ends

Usually bundled in or required by log analyzers

  • PostgreSQL
  • MongoDB
  • InfluxDB - Event database
  • RRD - Round robin database
  • Whisper - part of Graphite. In its turn uses storage back ends.

Mega systems - all in one

 

 


Installing syslog-ng on Ubuntu 14.04 LTS

published Nov 13, 2015 12:10   by admin ( last modified Nov 13, 2015 12:10 )

You need to explicitly install syslog-ng-core

 

I think this workaround should do it: apt-get install syslog-ng syslog-ng-core


Read more: Link - Bug #1242173 “syslog-ng package fails to install” : Bugs : syslog-ng package : Ubuntu


jq: How to filter an array and add suffixes to values

published Nov 11, 2015 03:36   by admin ( last modified Nov 11, 2015 03:36 )

Given the output of the jlist CLI command from pm2, you could filter it like this:

jq '.[]|{service:(.name| . += "memory"),  mem: .monit.memory},{ cpu: .monit.cpu, status: .pm2_env.status}'

Explanation

.[] means "for each object in the array", basically. The filter after the pipe will be executed once per array object.

(.name | . += "memory") means: pipe the value of the name key into the next expression, where the dot represents it, and append the string "memory" to it.
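A worked example; the pm2 output below is made up and trimmed to just the fields the filter uses:

echo '[{"name":"api","monit":{"memory":52428800,"cpu":3},"pm2_env":{"status":"online"}}]' | \
jq '.[]|{service:(.name| . += "memory"),  mem: .monit.memory},{ cpu: .monit.cpu, status: .pm2_env.status}'

This prints:

{
  "service": "apimemory",
  "mem": 52428800
}
{
  "cpu": 3,
  "status": "online"
}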