Sunday, 22 October 2017

Exporting and Processing High Dynamic Range (HDR) MiSphere Photos with Ubuntu




Introduction

Capturing 360-degree photos with the MiSphere camera is very easy - the camera is just point-and-shoot, and the software on your mobile phone does all the rest.  However, if you wish to do more clever things, such as generating high dynamic range (HDR) images, the software is lacking.

This note shows how to take bracketed photos with the camera, merge them into a single HDR image, and convert the result to an equirectangular photograph.

Take three photos using the Exposure Compensation set to +0, +1.5 and -1.5

MiSphere Camera Exposure Compensation Settings
The following images are examples taken out of a window, such that the inside is dark and the outside is too bright.  These images will be used for the example in this post.
Three Bracketed Images
Transfer these to a PC.

Use Luminance HDR to generate a HDR image.


Load the three images into Luminance HDR.

The exposure compensation should be automatically detected, but if not, assign the +1.5 to the brightest, and -1.5 to the darkest of the three images.

Luminance HDR Load Images
Select next, and use the default profile to continue.
You will then be presented with the Luminance HDR main window.
Some important things to note here:
  • There are several different Tonemap operators you can select.
  • The Result size by default is tiny (256 x 128 in this example).
Start by selecting a result size of 6912 x 3456 (the same as the input image size).
Then, pick an image on the right that you think may be close to what you want.

I've preferred:
  • Drago
  • Mantiuk '08

Play around with the sliders, and click on Tonemap to update the preview.

When you've got something you like, hit Save As, and your image will be saved as a jpg.  Pick 100% quality (i.e. minimal compression) when you save.

Luminance HDR Drago Output

Use Hugin to convert the fisheye image to an equirectangular one

You may think that at this point, you can run the Mi Sphere Camera Windows program under Wine (it does work, by the way).  Unfortunately, it refuses to process images that have been tampered with since they left the camera.

For this, you can make use of a project file.  There are several available - I used the one available from: http://ez-team.com/xiaomi.html.

Copy the pto file into a directory, and open it with Hugin.  It will prompt you for the file, so navigate to the jpg/tif file that was output from Luminance HDR.

Hugin Panorama Stitching
I chose the advanced option and selected Advanced Mode; then, in the panorama preview, you can centre the image and level the horizon.
Then, in the Panorama Stitcher, you can calculate the optimal size and stitch - this will give you an equirectangular output image.
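If you prefer to script this step, recent Hugin releases ship a command-line runner that performs the same stitch; a rough sketch, assuming the project file already points at your tonemapped image (the file names here are just examples):

hugin_executor --stitching --prefix=hdr_pano xiaomi.pto

This should leave an hdr_pano.tif in the current directory.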

Example High Dynamic Range Output Image

The next steps

Once you have your image, you will probably want to share it.

Saturday, 8 July 2017

Setting up a Local Alexa Development Environment


This blog entry shows how to set up a local development environment for the Amazon Echo.  It doesn't provide information on how to write Echo skills (that's reserved for other blog entries!).

This assumes that you've already got an echo, and have already got an Amazon account set up.

Install the System Tools (Ubuntu shown)
$ sudo apt-get install nodejs npm python
$ pip install --upgrade aws
$ pip install --upgrade awscli
$ sudo npm install -g lambda-local aws-lambda nodejs
Set Up an Access Key Pair

Log into the Amazon Console using your amazon account: https://console.aws.amazon.com/iam
Expand the first entry (regarding root keys), and select Manage Security Credentials.
In the Security Credentials, Create a New Access Key.

Either download the access key information, or leave this dialog box open so that you can use the information later.

Note that you don't need to create any other keys (from the other parts of the IAM security pages).  Creating other keys may cause you to be invoiced for them!


Configure the tools for the User

Configure your AWS environment, using the access key and secret you previously created in the IAM console online.
$ aws configure
AWS Access Key ID:  ACCESS_KEY_ID
AWS Secret Access Key: SECRET_ACCESS_KEY
Default region name: eu-west-1
Default output format [None]: 
You can now Upload Lambda Applications
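As a quick check that the credentials work, you can list the Lambda functions in your account (the list may well be empty at this point):
$ aws lambda list-functions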

Creating a lambda-local application environment

Create an alexa subdirectory, and in it, create the following folders:
  • assets - This is where the images and utterances will go (used in the launching and publication of your skill)
  • bin - This is where your utilities / applications will go
  • dist - This is where your distribution snapshots will go
  • js - This is where your javascript will go
  • test - This is where your test json input files will go
In the bin folder, create three scripts:

bin/install
#!/bin/sh
cd "`dirname $0`/../js"
MODULES="`grep require *.js | grep -v \\./ | sed -e s/\\"/\\'/g | cut -d\\' -f2`"
npm install $MODULES --save
echo "Installed: $MODULES"
bin/test
#!/bin/sh
cd `dirname $0`/..
if [ -z $1 ]; then
  cd test
  echo "Usage: test event"
  echo "Valid Events: `ls *.json | sed -e 's/.json//g'`"
  exit
fi
lambda-local -p ~/.aws/credentials -l js -e "test/$1.json"
bin/upload
#!/bin/sh
cd `dirname $0`/..
HOMEDIR="`pwd`"
APPNAME="`basename $HOMEDIR`"
DATE="`date +%y%m%d%H%M%S`"
cd js
zip -r ../dist/$APPNAME-$DATE.zip . > /dev/null
cd ../dist
rm -f $APPNAME.zip
ln -s $APPNAME-$DATE.zip $APPNAME.zip
aws lambda update-function-code --function-name $APPNAME --zip-file fileb://$APPNAME.zip
 

You can now create your skill's files in the js folder, run them locally with the test command, and upload the project to AWS Lambda (where Alexa will invoke it) with the upload command.
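Writing skills is the subject of other posts, but as a sanity check that the environment hangs together, here is a minimal js/index.js sketch that lambda-local can invoke (the file name and speech text are just placeholders):

exports.handler = (event, context, callback) => {
   // Log the incoming request so you can see what the simulator / Echo sends
   console.log("Request: %j", event);
   // Return a minimal, valid Alexa response
   callback(null, {
      version: "1.0",
      response: {
         outputSpeech: { type: "PlainText", text: "Hello from the local environment" },
         shouldEndSession: true
      }
   });
};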

Notes

Make sure your test json files have a userId defined, otherwise scripts that use DynamoDB will fail to run, because your scripts will use the access key to access your online DynamoDB database.
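A minimal sketch of such a test event, e.g. test/launch.json (all of the IDs below are placeholders):

{
   "version": "1.0",
   "session": {
      "sessionId": "SessionId.example",
      "application": { "applicationId": "amzn1.ask.skill.example" },
      "attributes": {},
      "user": { "userId": "amzn1.ask.account.example" },
      "new": true
   },
   "request": {
      "type": "LaunchRequest",
      "requestId": "EdwRequestId.example",
      "locale": "en-GB",
      "timestamp": "2017-07-08T12:00:00Z"
   }
}

You can then run it with: bin/test launch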

If you get errors when running the upload script stating that 'aws lambda' is not recognised, it is possible that you've got an old version of the aws program installed.  Try 'aws -v' or 'aws --version'.  Places to look are /usr/local/bin and /usr/bin.



Friday, 19 May 2017

Echo: distinguishing multiple users

At the moment, if you have several users in the household and you want separate functions, e.g. diaries or music collections, you have to switch household user accounts, which is quite cumbersome.
There is work going on in recognising individuals' voices, but I think there is a much simpler idea ... using different wake words.
So I can say "Alexa, what's going on today"
And my partner can say "John, what's going on today"
And there is enough information to direct the search to the correct diary.
I suspect this idea is patentable, but I don't have the time to go through the process and would really appreciate the feature.
So the best way for me is to put it in the public domain for anyone to use.

Monday, 8 May 2017


Dealing with Amazon Service Simulator Errors

When developing Alexa skills, a message I frequently get on the Amazon Developer Console in the service simulator is:
"The remote endpoint could not be called, or the response it returned was invalid."
There are many causes for this, some of which are far from obvious.

1. The skill cannot be contacted.


Check the Service Endpoint Type and Endpoint in the Configuration in the Amazon Developer Console correctly point to a skill (either on a https server, or on, for example, the EU Amazon AWS Server).

2. The request is incorrectly formatted - invalid card

In the JSON response, if you follow the developer examples, you can have an entry like:

{
   "version": "1.0",
   "response": {
      "card": {},
      .....
This works fine when running on the AWS Lambda server, but in the Service Simulator it fails with the dreaded 'endpoint' error.  If you don't have anything useful to put in the "card", don't add the entry to the response at all.
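For example, a minimal response that the simulator accepts looks like this (the speech text is just an illustration):

{
   "version": "1.0",
   "response": {
      "outputSpeech": { "type": "PlainText", "text": "Hello from the skill" },
      "shouldEndSession": true
   }
}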

3. The request is incorrectly formatted - invalid session

I captured the real output of an echo (see next section), and submitted it as a JSON input in the Service Simulator.  I had to manually add in the "attributes" line to make the simulator work correctly.  This code did, however, work in the AWS Lambda test environment.

"session": {
    "sessionId": "SessionId.blah",
    "application": { "applicationId": "amzn1.ask.skill.blah" },
    "attributes": {},
     "user": { "userId": "amzn1.ask.account.blah" },
     "new": true
 },

4. Check your Spelling

The spelling in the code is American English, not British English.  Note that in the AudioPlayer.Play directive it is playBehavior, not playBehaviour!
"playBehavior": "REPLACE_ALL",

Things To Try

Copy and Paste from the Simulator

Use the service simulator in the developer console, then copy the JSON request, and paste it into the 'Actions / Configure Test Event' on the Lambda server.  This way, you may see the cause of the error.

Copy and Paste from a Real Echo

Modify your code to include the following line (or equivalent) in your javascript:
exports.handler = (event, context, callback) => {
   console.log("Request: %j", event) ;
   .....
Ask your Echo to launch the skill (using whatever utterance you need to test the appropriate function).  Now, launch the CloudWatch (link to EU Server), and look at the latest log entry.  You can then see the actual request from the Echo.

You can paste this request into the 'Actions / Configure Test Event' on the Lambda server, or into the JSON input in the Amazon Developer simulator.

You will notice that the Simulator and the Echo sometimes give differently formatted requests, and that the Simulator and the AWS Lambda server give different responses (or errors!).



Wednesday, 3 May 2017

Part 2: Configure a Raspberry PI

Configuring a Raspberry Pi


This is Part 2 in the Playing Custom Media Streams with the Amazon Echo series of posts


Part 1: Playing Custom Media Stream with Amazon Echo Part I
Part 2: Configure a Raspberry PI
Part 3: Obtain and Install LetsEncrypt Certificates
Part 4: Design and Build a https relay
Part 5: Opening network ports to allow correct operation
Part 6: Developing a simple media player application
Part 7: Installing a modified UPNP media server
Part 8: Installing a pseudo-radio station and bridging the UPNP server to the https relay
Part 9: Adding Chromecast casting push support

Installing And Enabling Remote Login

Install the Operating system as follows:
  • Insert the Raspberry Pi NOOBS memory card
  • Connect the Raspberry Pi to a TV or monitor using the HDMI connector
  • Plug in a USB mouse and Keyboard
  • Boot the Pi, search for and set up the WiFi network, and enter the correct password
  • Select the XXX OS (1.1Gb), and then install
Enable remote logins, set the hostname, and set other parameters:
  • Reboot, then login as pi/raspberry
  • sudo raspi-config
  • Change the hostname (option 2)
  • Enable SSH (option 5 / P.2)
  • Set the memory split to only reserve 16Mb for video (option 7 / A.3)
For me, the ssh server key files were all empty, so I had to re-build them:
cd /etc/ssh
ssh-keygen -f ssh_host_dsa_key -N '' -t dsa
ssh-keygen -f ssh_host_ecdsa_key -N '' -t ecdsa
ssh-keygen -f ssh_host_ed25519_key -N '' -t ed25519
ssh-keygen -f ssh_host_rsa_key -N '' -t rsa
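After regenerating the keys, restart the ssh service (or simply reboot) so that it picks them up:
sudo systemctl restart ssh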
Fix the server's IP address.  This is done most easily on the modem that issues the DHCP addresses based on the MAC address.  Alternatively, it can be done within:
  • sudo raspi-config
Finally:
  • Reboot the Pi

Locking Down

In order to lock down the Pi, there are a few recommended steps, including:
  • Setting up a new user account
  • Setting up an admin account
  • Removing the pi account
  • Configuring login via ssh
Firstly, create the users and groups, and when you do, make sure that the passwords are fiendishly complicated (once you've finished, you'll only need these to set up new connections).
sudo adduser localuser
sudo adduser localadmin
sudo addgroup datafiles
Edit the groups, and ensure that the users are in the correct groups, and pi is removed from all.
sudo vi /etc/group
adm:x:4:localadmin
dialout:x:20:localadmin
cdrom:x:24:localuser,localadmin
sudo:x:27:pi,localadmin
audio:x:29:localuser,localadmin
video:x:44:localuser,localadmin
plugdev:x:46:localadmin
games:x:60:localuser
users:x:100:localuser,localadmin
input:x:101:localuser,localadmin
netdev:x:108:localadmin
spi:x:999:localuser,localadmin
i2c:x:998:localuser,localadmin
gpio:x:997:localuser,localadmin
Yes, I realise that pi is still in the sudo group.  Now, disconnect, and re-connect as the user:
ssh localuser@pi-address
Enter the long password, and once connected:
mkdir .ssh      (this will be needed later)
su - localadmin
sudo vi /etc/group
Remove pi from the sudo group.  Then edit the password and shadow password files:
sudo vi /etc/passwd
sudo vi /etc/shadow
Remove the pi entry from each file.  Finally, remove the pi home directory files:
sudo rm -rf /home/pi

Enabling Password-Free SSH Login from a Linux PC

Logout from the Pi, and on a Linux PC:
client$  ssh-keygen    (note: you only need to do this once - ever!)
client$ cat ~/.ssh/id_rsa.pub | ssh localuser@pi-ipaddress 'cat >> .ssh/authorized_keys'

Enter the long password (for the last time).

Now, you can login using ssh:
client$ ssh localuser@pi-ipaddress
If you need to perform any admin activities:
localuser$ su - localadmin
And enter the localadmin fiendish password.  This localadmin user can run jobs as root:
localadmin$ sudo programname

Configuring the Wired Network Interface

If you wish to use the wired interface, simply plug it in.
It is recommended to set this up using your modem DHCP to bind the MAC address to a static IP address.
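Alternatively, if you prefer to set the address on the Pi itself, recent Raspbian releases use dhcpcd; a minimal sketch for /etc/dhcpcd.conf (the addresses below are examples for a typical home network):
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1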

That's It

  • The Pi is now configured.
  • You can connect to it as a mortal user with: ssh localuser@pi-ipaddress
  • You can become the administrator with su - localadmin
  • You can run root tasks with sudo taskname






Tuesday, 2 May 2017

Playing Custom Media Stream with Amazon Echo Part I

Introduction

The Amazon Echo is a great piece of kit, which is flexible and can be expanded with the use of skills.  These skills are initially difficult to get your head around, but once you've managed one, they become easy to produce.

This note takes you through the steps needed to get an internet radio station playing on an Echo, including developing the skill, setting up an encrypted link, and implementing a relay for an unsupported radio station or stream.

Background

Stateless and Variables

The Echo system is stateless, i.e. the servers normally do not hold the state for any conversation chain.  This is achieved by passing variables back to the Echo, and they can be used in subsequent conversations, for example:
"Alexa, ask flintstone film what is the name of the main character"
"The main character is Fred" (hidden note: you asked me what is the name of the main character, and I said Fred)
"Alexa, what is his wife's name" (hidden data - note: you asked me what is the name of the main character, and I said Fred)
"Fred's wife is Wilma"

Secure Communications

The Echo will only connect to servers that it can obtain trusted certificates for: the connections must be over 'https', and the remote server must have certificates signed by a recognised authority.

If the Echo is asked to connect to a site without the appropriate certification, no response may result.

On first look, this seems to be Amazon's effort to stop you writing functions such as playing music from your own media server (i.e. forcing you to buy Amazon's services instead).  However, it does have a significant advantage: the Echo is not able (by protocol) to go snooping around inside your house, because devices in your house are not accessible on the internet and do not have the right certificates.

Radio Stations and Streaming Media Problem

Unfortunately, the majority of internet radio stations do not operate over 'https' with appropriately signed certificates.  The built-in TuneIn app either has some code which allows it to bypass this restriction, or TuneIn have a number of relays and will forward http-based stations over an https connection.

I've not sniffed the Echo's network traffic to determine which it is doing - if anyone has found out, a comment would be great!

For me, the radio station I like to listen to is "IP Music Slow", which is based in Switzerland.  Unfortunately, TuneIn no longer works with this station, and tries to play "KP4IP" any time I request my station.  The TuneIn app is actually capable of playing "IP Music Slow" (if I select the station from the history in the web browser control panel, from a time when it did work), but if the speech recognition picks the wrong station, this is of very little use.

So, to stream your own station, it is necessary to build a http to https relay.  This method can also be applied to a media server (i.e. attach an ice-cast server to a UPNP media server, and forward the tracks as a radio stream).

A Media Relay System

Introduction

The system I've set up uses a Raspberry PI, which is accessible on the internet via an https link, and various pieces of software to allow it to be recognised and used as a relay.  Note that Amazon allow you to use a development configuration, where the certificate trust can be uploaded and a root CA is not required; however, this note demonstrates a way of doing it 'properly'.

Component parts and steps
  • Part 2: Configure a Raspberry PI
  • Part 3: Obtain and Install LetsEncrypt Certificates
  • Part 4: Design and Build a https relay
  • Part 5: Opening network ports to allow correct operation
  • Part 6: Developing a simple media player application
  • Part 7: Installing a modified UPNP media server
  • Part 8: Installing a pseudo-radio station and bridging the UPNP server to the https relay
  • Part 9: Adding Chromecast casting push support