Sunday 7 October 2018

Setting up OpenSUSE on a Dell XPS 15 laptop model 9570 (2018)

After nearly a month of waiting I took delivery of a new Dell XPS 15 laptop, model 9570, in July 2018. I use the OpenSUSE Linux distribution but am also learning C++ with Microsoft Visual Studio, so I want to keep the Windows 10 installation.

What I should have done first is go into the BIOS setup (F12 on the round Dell boot logo) and change the storage setting from RAID to AHCI. By default the laptop ships in RAID mode, but the Linux kernel refuses to see the disk that way (at least out of the box). I had a lot of trouble because of this setting: Windows deployed itself using the RAID drivers, which can't deal with AHCI mode when you switch it later. I ended up with a practically unbootable machine several times.

So I think that if I had switched to AHCI mode from the start, Windows 10 would have set itself up using the correct drivers.
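
If Windows has already installed itself in RAID mode, a widely reported workaround (I have not verified it on this machine, so treat it as an assumption) is to make Windows boot into Safe Mode once while you flip the BIOS setting, so it picks up the AHCI driver:

REM In an elevated Command Prompt: boot into Safe Mode on the next restart
bcdedit /set {current} safeboot minimal
REM Reboot, enter the BIOS and switch the storage mode from RAID to AHCI,
REM let Windows come up in Safe Mode, then switch back to normal booting:
bcdedit /deletevalue {current} safeboot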

After Windows 10 has set itself up and claimed the whole disk, use the Windows Disk Management tool to shrink the Windows partition and create space for your Linux partitions. "THINK" is not a four-letter word: plan your partition sizes carefully.
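
Before committing to anything it helps to look at the disk from the Linux installer or a live medium; the layout below is only an illustration of the kind of plan I mean, not a recommendation:

# Inspect the existing partition layout from a Linux live medium
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
# Example plan (illustrative sizes only):
#   existing EFI system partition - keep it, it will be mounted as /boot/efi
#   existing Windows partition    - shrunk with Disk Management
#   new /     (root)              - e.g. 60 GB
#   new /home                     - the remaining space
#   new swap                      - at least the RAM size if you want hibernation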

In the BIOS, "Secure Boot" needs to be off and "Legacy Boot" needs to be on to start the Linux installer from the USB stick.

I then had trouble with the EFI partition. On this generation of laptops the bootloader needs to be GRUB2-EFI.

OpenSUSE tells you not to use the Mesa-dri-nouveau driver, especially not with KDE, as it can lead to crashes and hangs! Install the NVIDIA drivers instead: https://en.opensuse.org/SDB:NVIDIA_drivers
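
At the time of writing, that page boils down to adding the NVIDIA repository and installing the recommended packages; check the page for the current commands and repository URL before copying this:

# Add the NVIDIA repository and pull in the recommended driver packages
sudo zypper addrepo --refresh https://download.nvidia.com/opensuse/tumbleweed NVIDIA
sudo zypper install-new-recommends --repo NVIDIA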

Pending issues:
OpenSUSE Tumbleweed doesn't shut down properly and the computer just hangs on shutdown. I recently upgraded the BIOS from 1.3 to 1.41 but haven't observed an improvement. I have to press the power button for 20 seconds or so to shut down. The NVIDIA driver doesn't seem to help much here either.
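
One way to dig into the hang is to read the journal of the previous boot after the next forced power-off; this assumes a persistent journal, so it is a debugging sketch rather than a fix:

# make the systemd journal persistent so shutdown messages survive a forced power-off
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald
# after the next hanging shutdown, read the tail of the previous boot's log
journalctl -b -1 -e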


Useful links:
How to boot from a live medium and chroot onto the installed partition: https://forums.opensuse.org/content.php/146-Using-a-LiveCD-to-take-over-repair-an-installed-system

https://forums.opensuse.org/showthread.php/528400-Repair-a-broken-UEFI-GRUB2-openSUSE-boot-scenario
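
The gist of such a chroot repair, as a rough sketch (the partition device names below are placeholders for your root and EFI partitions):

# Boot the live medium, then mount the installed system
sudo mount /dev/nvme0n1p6 /mnt                # root partition (placeholder)
sudo mount /dev/nvme0n1p1 /mnt/boot/efi       # EFI system partition (placeholder)
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
# inside the chroot: regenerate the config and re-install GRUB2-EFI
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install
exit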

Thread on the Dell support forum about RAID and AHCI: https://www.dell.com/community/Laptops-General/Dell-M-2-FAQ-regarding-AHCI-vs-RAID-ON-Storage-Drivers-M-2-Lanes/td-p/5072571/page/3


Update April 2019

OpenSUSE Tumbleweed has upgraded to kernel 5. I am most pleased to report that suspend and hibernate suddenly show up as options in KDE and work splendidly! These functions have always been a pain and I am super happy that they now work!

I also discovered that KDE has a zoom-in and zoom-out function for the desktop, which also works perfectly.

The hanging shutdown, where it would take 20 seconds or so to actually power off, also seems to be fixed now.

Very impressed!

Saturday 16 April 2016

Graphical C++ program in the Codenvy cloud

Graphical FLTK-based C++ program in the Codenvy cloud with CMake

(Updated in October 2018 to reflect the changes in Codenvy and the fact that the X-Window is gone)
(Update Nov 2019: Codenvy lives on as https://che.openshift.io )

Let's draw some Acid-Smileys on an FLTK window from a C++ program compiled with gcc using the CMake build tool.

You might want to read my earlier blog post about running a C++ Hello World with Codenvy if you are not familiar with Codenvy.



Step 1: Create the project by cloning this Git repo:

git@github.com:richardeigenmann/CppAcidSmileys.git
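
From a terminal the clone looks like this; the SSH URL assumes you have a GitHub SSH key set up, otherwise the equivalent https://github.com/richardeigenmann/CppAcidSmileys.git URL works anonymously:

cd /projects
git clone git@github.com:richardeigenmann/CppAcidSmileys.git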

The code is based on sample code and a learning exercise from Bjarne Stroustrup's Programming -- Principles and Practice Using C++

Step 2: Add the build tools and libraries

The project requires some tools, like CMake and the FLTK development headers, to be installed in the build environment. Go to the Terminal and type the following:

sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
sudo apt-get install -y cmake libfltk1.3-dev



Step 3: Compile

Go to the Terminal and run the following:

cd /projects
cd CppAcidSmileys
mkdir build
cd build
cmake ..
make

You now have the binary ClassedAcidSmiley, but when you run it, it doesn't have a DISPLAY:


Step 4: There used to be a built-in X desktop

But it was removed. There also used to be a VNC facility.
As of October 2018 I can't figure out how to bring either of them back.

Step 5: Copy to local

We can use our ssh trust to copy the compiled file down to our local machine:

scp -P 56219 user@node10.codenvy.io:/projects/CppAcidSmileys/build/ClassedAcidSmiley .

Of course you need to put in the correct port and node. Notice that ssh uses a lowercase -p for the port while scp uses an uppercase -P.

You can then run the file locally.
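
Provided the FLTK 1.3 runtime libraries are installed on your local machine, that is just:

chmod +x ClassedAcidSmiley   # scp normally preserves the execute bit, but just in case
./ClassedAcidSmiley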

Sunday 21 February 2016

Run Cyrus IMAPD mailserver from a Docker container with the mailbox data on an external volume

The Objective:

Run an IMAP server on the local Linux machine in a way that makes it easy to move from one computer to the next.

The use case: I want to run the IMAP server on my desktop most of the time, but when I go away I want to take it along on my laptop.

The first solution was to set up a headless virtual machine with VirtualBox and install the Cyrus IMAP server into the virtual machine. This works, and the .vdi disk images can be moved from one computer to the next. The downside is that the disk image is around 19 GB in size, which takes hours to copy. Also, running the VM on the laptop takes away memory and degrades performance.

The Docker solution:

Build a Docker container from the latest OpenSUSE image and install cyrus-imapd into it. Since containers "forget" all changes when they are shut down, we use a VOLUME to persist the database and mail data on the host filesystem. The host directory with the Cyrus data can then be rsynced to the new machine and the container can be started there. The mail client finds the IMAP server on localhost:143.

The Dockerfile:

FROM opensuse:42.1

ENV mailboxuser richi
ENV mailboxpassword password

MAINTAINER Richard Eigenmann 

USER root

# add the packages needed for the cyrus server and some to work with the shell
RUN zypper --non-interactive in \
  cyrus-imapd \
  cyradm \
  cyrus-sasl-saslauthd \
  cyrus-sasl-digestmd5 \
  cyrus-sasl-crammd5 \
  sudo less \
  telnet;

# set up the saslauthd accounts (complication: the host name changes all the time!)
# -u cyrus ensures the account is set up for the hostname cyrus
# cyrus is the account we need to run the cyradm commands
RUN echo ${mailboxpassword} | saslpasswd2 -p -u cyrus -c ${mailboxuser}
RUN echo "password" | saslpasswd2 -p -u cyrus -c cyrus
RUN chgrp mail /etc/sasldb2
RUN chsh -s /bin/bash cyrus


# Set up the mailboxes by starting the cyrus imap daemon, calling up cyradm
# and running the create mailbox commands.

# Step 1: set up a sasl password valid under the build hostname (no -u param).
# Since sasl cares about the hostname the validation doesn't work on the above
# passwords with the -u cyrus hostname.

RUN echo "password" | saslpasswd2 -p -c cyrus

# Step 2: We can't use here-documents in docker so we create the instructions
# that cyradm needs to execute in a text file

RUN echo -e "createmailbox user.${mailboxuser}\ncreatemailbox user.${mailboxuser}.Archive\nexit" > /createmailbox.commands

# Step 3: Start the daemon and in the same build container run the cyradm command
# (note the ; \  at the end of the line!)

RUN /sbin/startproc -p /var/run/cyrus-master.pid /usr/lib/cyrus/bin/master -d; \
sudo -u cyrus -i cyradm --user cyrus -w password localhost < /createmailbox.commands; \
mv /createmailbox.commands /createmailbox.commands.completed;


# create a file startup.sh in the root directory
RUN echo -e "#!/bin/bash\n"\
"if [ -e /var/dostart.semaphore ]; then\n"\
"chown -R cyrus:mail /var/spool/imap /var/lib/imap\n"\
"/usr/lib/cyrus/bin/master -d\n"\
"sleep .6\n"\
"ps u --user cyrus\n"\
"fi"\
> /startup.sh; \
chmod +x /startup.sh


# start the cyrus server and a shell
CMD  /startup.sh; /bin/bash

Running the server:

Build the container:
docker build -t richi/cyrus-docker:latest .

Do these steps once to set up the mail server and the host directory:
# on the host server
mkdir /absolute/path/to/the/exported/directory/var
docker run -it --rm --hostname cyrus -v /absolute/path/to/the/exported/directory/var:/mnt richi/cyrus-docker:latest

# inside the container 
cp -r /var/* /mnt
touch /mnt/dostart.semaphore

All subsequent runs:
docker run -it --rm --hostname cyrus -p 143:143 -v /absolute/path/to/the/exported/directory/var:/var --log-driver=journald richi/cyrus-docker:latest

Testing:

telnet localhost 143

#should result in output like this:

Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ ID ENABLE LOGINDISABLED AUTH=DIGEST-MD5 AUTH=CRAM-MD5 SASL-IR] cyrus Cyrus IMAP v2.4.18 server ready


Discussion:

Setting up the basic container and adding the Cyrus software is straightforward.

Setting up the mailbox user account with its password and creating the mailbox structure is tricky: Cyrus uses saslauthd to check the passwords of the users logging in. Saslauthd has some sort of anti-tamper mechanism that pulls the hostname into the validation. Since the Docker build process changes the hostname at every step, this gets problematic. The saslpasswd2 -u cyrus statements set the passwords for the user account and the cyrus admin account for the hostname cyrus (the -u).

To set up a mailbox account, Cyrus requires the daemon to be running. The user cyrus then needs to run the cyradm command with the instructions to create the mailbox. Here-documents don't seem to be supported inside Dockerfiles, so we first create a script file "createmailbox.commands". We then use sudo to switch to the cyrus account and pipe in the instructions from the script file.

This creates a Docker container that can start up, knows the user and his password, and has the basic mailbox structure. You can point your mail client at this IMAP server and things will work fine until you restart the container: the container forgets all changes when it is shut down. Since cyrus-imapd stores all its state in the /var directory, the solution is to export the /var directory to the host filesystem so that it can easily be transported to other computers as well as backed up. The -v parameter in the docker run command does just this.
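
In its general form (some-image is just a placeholder for the image name):

docker run -v /absolute/path/on/host:/absolute/path/in/container some-image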

The syntax of the -v parameter is: the absolute (!) path of the directory on the left of the colon gets mounted over the directory on the right. Annoyingly, if you just use the -v bind-mount parameter, the previous contents of the /var directory in the container are hidden and you only see the empty /var directory from the host filesystem. There doesn't appear to be a way to bind-mount the host directory so that the hidden directories and files from the container "shine through" while all new writes go to the bind-mounted directory.

Therefore we must first copy all the content of the container's /var to the host directory. The way I suggest doing this is to start the container and bind-mount the host's directory to /mnt in the container. A cp -r can then copy all content from /var to the new directory. After shutting down the container and starting it up with the directory mounted to /var, we are back to the original view.

But not quite: the important directories for Cyrus, /var/lib/imap and /var/spool/imap, used to be owned by cyrus:mail but are owned by root after the volume mount. The server refuses to read the mailbox database if it is root-owned, so we need to correct this before startup. I have therefore created a startup.sh script that fixes the ownership of the mounted host directory and then starts the daemon. To keep everything in one Dockerfile I create the startup script with an echo statement right inside the Dockerfile.

To facilitate rsyncing from one host to the other I suggest a chown -R user:users on the host directory. Docker runs as root and will create all new files as root-owned, but it can perfectly well read and write user-owned files. Userspace synchronisation tools, however, will find it much easier to deal with user-owned files.
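
Moving the data to the other machine then looks something like this sketch (the host name and paths are placeholders, and the trailing slashes matter to rsync):

# on the current machine, after stopping the container
sudo chown -R user:users /absolute/path/to/the/exported/directory/var
rsync -aH --delete /absolute/path/to/the/exported/directory/var/ laptop:/absolute/path/to/the/exported/directory/var/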

Monday 14 September 2015

Running C++ Hello World in the cloud: Codenvy

Running Hello World in Codenvy

(Updated October 2018 to reflect the changes in Codenvy)

Go to Codenvy


Sign Up or Login via the "GET STARTED" link.

Create a new workspace. Give it a name like "CPP", pick the C++ stack and create the workspace.



To create the Hello World program click on "Workspace > Create Project..." Fill in a Name and description for the project and click Create.


Now expand the project in the Projects Explorer. You have a hello.cpp file. Double click on it and it already has everything to print Hello World!


The run command (the blue triangle at the top right) is wired to call make, which expects a Makefile with the recipe to run the C++ compiler. Makefiles have annoying demands about tabs, so we have to tell the editor to keep them. That means turning off tab expansion: go to Profile > Preferences > IDE > Editor > Expand Tab and turn the option off:



Now let's create the Makefile with the following content. Note the tab before the g++.

all:
g++ -std=gnu++11 hello.cpp

# Note that Makefiles require a tab before the g++ 
# It probably doesn't copy/paste well from this blog post

Create a new file (Project > New > File) with the name Makefile and paste the above lines. Make sure to change the space before the g++ to a tab!
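
If the editor keeps fighting you over the tab, you can also create the Makefile from the Terminal inside your project directory; printf turns the \t into a literal tab:

printf 'all:\n\tg++ -std=gnu++11 hello.cpp\n' > Makefile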


Now you are ready to run the Hello World program. Click on the blue triangle and execute it:



That didn't run so great... We got a "run" tab in the Processes section at the bottom of the screen, and a nice big error. Looks like the C++ code is a bit off! Double-click hello.cpp to open the file in the editor. Change the include statement on line 3 from iostream.h to iostream. Also, cout lives in the std namespace, which isn't mentioned, so prefix it with std::. Then hit the triangle and it works!




Interactive shell

That was fun but can we have an interactive shell? How can we make the program prompt the user for her name and play it back? We need a different runner!

Let's see. Here is an interactive C++ program:

#include <iostream>
#include <string>

int main()
{
   std::cout << "Your name please: ";
   std::string name;
   std::cin >> name;
   std::cout << "Hello: " << name << "!\n";
   return 0;
}


Let's run it with the blue triangle:


Looks like it used /dev/null as input. Not cool.

But we have the "Terminal". Let's use that and do the following steps:
Much better!

SSH

Assuming you are on Linux and have run ssh-keygen, you have your public key in ~/.ssh/id_rsa.pub. Upload this to Codenvy: go to Profile > Preferences > SSH > Machine > Upload Key and upload your public key. The workspace then needs to be restarted: Workspace > Stop, then Start.
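
If you don't have a key yet, generate one first and print the public part to paste into the upload dialog:

ssh-keygen -t rsa -b 4096    # accept the default location ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub        # paste this into the Codenvy upload dialog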

In the Processes panel, next to the dev-machine there is the word SSH. Click on that and you receive connection information:


You can now cut & paste the ssh connection command into your local console. Then cd to /projects and you can run your program:
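
Put together it looks roughly like this (the port, node and project name are placeholders; take the real values from your SSH panel):

ssh -p 56219 user@node10.codenvy.io   # port and node come from the SSH panel
cd /projects/CPP                      # whatever you named your project
./a.out                               # the Makefile above produces a.out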

Sunday 9 March 2014

Don't call it "new"! But calling it "old" is OK.

Have you ever seen a directory with file names like this?

Report.doc
Report-new.doc
Report-new2.doc
Report-new-new.doc
Report-final-new.doc

Which is the current one and which are older working copies?

The super organised person would name the files like this:

Report-2014-01-17.doc
Report-2014-02-21.doc
Report-2014-03-05.doc

For the rest, let me recommend calling the files "old" as you have no trouble picking out the latest version here:

Report.doc
Report-old.doc
Report-old2.doc
Report-old3.doc

Of course that means you need to save the changes in two steps: save the new version of the document under a temporary name, close the document, go to the file explorer and rename the two files. The effort is worth it!

Monday 1 April 2013

Best advice I ever got

The best advice I ever got came from Renée Watkins. She gave me a hard time over some software I had written to book FX trades. She kept asking me for detail upon detail and I just didn't know all the answers. Eventually she recommended that I ask WHY? I took this to heart and it has helped me no end! If you don't know why something is supposed to work this way or that, then whatever you code will not fit the expectations of your users.

It also ties in with another favourite from work: "No surprises". If you ask enough probing questions then you will understand the problem being solved and will avoid many unpleasant surprises.

Backups

OK, you say, I get it, we should back up our data! And then you make a half-hearted attempt and move on. But deep down you know about MTBF, the Mean Time Between Failures. The one thing we can say for certain about mechanical systems (such as your hard disk) is: IT WILL FAIL. The MTBF might give you the confidence that "my hard disk is likely to go on for another 3 years", and I sure hope it does. And when it does fail, often it's not completely dead and you can get much of your data off it...

So my suggestion is to keep your data fully replicated on multiple devices. To do this easily I find it best to have one single directory underneath which everything of importance goes. Think about it: when your disk blows in 3 years your computer is old and you will replace it with the shiniest new one that your budget allows. It will have a new version of Windows on it with new versions of the applications you use (and icons all looking different and in unusual places). You don't need a backup of the operating system and the programs; you just need a backup of your data. [Yes, a list of the programs you use will be helpful! Perhaps you should go off and create just such a list in Evernote right now?]

I suggest you have one directory on the root of the filesystem (say c:\) with the name of the person. Example:  c:\Tom

Then you need to consider what kind of data you have. Some of it will be insensitive, such as eBooks, mp3s and movies, whilst other data is more private in nature, like your salary slips, tax filings, accounts or contracts. You can grant and revoke permissions at the directory level, so I suggest you create the insensitive directories directly under the main directory and a Private directory for the more sensitive stuff. For example:

c:\Tom\Mp3
c:\Tom\Pictures
c:\Tom\Movies
c:\Tom\ToDo
c:\Tom\Private
c:\Tom\Private\Taxes
c:\Tom\Private\Contracts
c:\Tom\Private\Contracts\HealthInsurance
c:\Tom\Private\Accounts

You need to decide where the pictures should go. You probably want to share them with friends and family so they would more likely go into the main directory than the "Private" directory. (If aunt Mathilda is sitting next to you do you really want to be clicking around in the "Private" directory?)

I find it very useful to have a "ToDo" folder. This is supposed to be empty but will take all temporary stuff that you haven't filed properly yet.

The goal is to have all your data somewhere under c:\Tom and nothing on your Desktop, nothing in c:\documents and settings\local user\My Pictures\ or other crazy locations. There is an added advantage: some programmers seem to think that they can freely create junk files in your "Documents and Settings" folder. You have no idea what these files are and don't know if you can delete them without breaking anything. By keeping your data in your own structure, they can freely use those locations and you will simply walk away from that pile of junk when you upgrade to your next computer.

Now you are ready to do something about your backups! In the simplest form you just copy the entire c:\Tom folder to an external hard disk. Buy a large one and call the copy something like \Tom-Backup-2018-03-01 and the next one \Tom-Backup-2018-04-01. This allows you to go back to an old backup if you discover that a file was corrupted or you accidentally lost half the text of your thesis some time in March.
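
On Linux the same idea is a plain recursive copy to a dated target (the paths are illustrative):

# -a keeps timestamps and permissions; the target name carries the date
cp -a /home/tom/Tom /run/media/tom/BackupDisk/Tom-Backup-2018-03-01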

I own multiple computers and like to have the whole directory replicated to each machine (in the belief that not all disks will fail at the same time). The problem you run into is that, between copies, different files will have been modified on each of the machines. You need clever software to figure out which files were modified on which machine so that the latest version can be copied over. My favourite software for this is the Unison File Synchronizer. It works really fast on huge directories between two Linux machines and works well (but more slowly) when comparing two directories (one local, one remote) on Windows.
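
A minimal Unison run between the local copy and the same directory on another machine reachable over ssh looks something like this (host and paths are illustrative):

unison /home/tom/Tom ssh://laptop//home/tom/Tom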

For backups I recommend Box Backup. This backup software looks for changes on the filesystem, encrypts the changes and uploads them to the backup server. By looking for the changes it doesn't have to upload all 100 MB of a file, just the parts that actually changed. Because it stores the changes it can reconstruct a file from several changes back. Because it encrypts the data on the client, the person running the server can't decrypt the data. It runs in the background and figures out what to do completely on its own. The client is available for Linux and Windows whilst the server needs to run on Linux. The downside is that it is difficult to set up (especially the bit with the cryptography keys). Also, most home users are throttled on the uplink of their internet connection, which makes backups very slow. At worst the internet will seem slow because page requests have to queue up behind large backup packets.

Update on 19 May 2013: http://freefilesync.sourceforge.net/ looks like an interesting alternative to Unison for directory synchronisation.