Saturday, 12 September 2015

JackD, Audio Applications and Ubuntu 14.04 64 bit

Just re-installed Ubuntu 14.04 (this time, the 64-bit version).
I'd already set up my bluetooth speaker: see this post.
The next step was to set up JackD so that I could perform sound/MIDI editing.

I started QJackCtl and, in Settings, configured it to start jackd automatically and to show itself in the system tray.  I got the following error when I tried to start the server:

D-BUS: JACK server could not be started
To get the server going, I first needed to check which card ALSA assigns to my output device:

aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 7: HDMI 1 [HDMI 1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 8: HDMI 2 [HDMI 2]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 0: ALC887-VD Analog [ALC887-VD Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
Here, the list tells me that my output device (the analogue port) is card 1, so in QJackCtl's Settings I need to set the Interface to hw:1, not hw:0.
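For reference, QJackCtl ends up launching the server with a command line roughly like the one below - the sample rate, period size and number of periods are only example values, so use whatever you chose in the Settings dialog:

# ALSA backend, card hw:1, 44.1kHz, 1024 frames/period, 2 periods (example values)
/usr/bin/jackd -dalsa -dhw:1 -r44100 -p1024 -n2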


Now when I start QJackCtl, the JackD server launches automagically.

JackD and Friends

QJackCtl and its Connections

JackD is the server process that lets all JACK-aware applications talk to each other, and QJackCtl is a front end to JackD that lets you manage what is connected to what.  In this example, you can see that the audio synthesizer is connected to the system output (the speakers).  In this setup, Rosegarden (the MIDI player) is connected to the synthesizer on the MIDI tab.


So you can connect pretty much anything to anything.  When you get going with multi-channel applications, you can route different channels in different directions - some channels through sound-effects processors, some instruments out to external MIDI hardware players - almost anything is possible.

This is a screenshot of Rosegarden sending MIDI to QSynth, which passes its audio output to Ardour (where it is being recorded), to Meterbridge, and to the speakers.
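If you prefer the command line, the same sort of connections can be made with the JACK utilities.  The port names below are only illustrative - run jack_lsp first and use the names it reports on your system:

# List every port JACK currently knows about
jack_lsp
# Wire the synthesizer's stereo output to the system playback ports (example names)
jack_connect qsynth:left system:playback_1
jack_connect qsynth:right system:playback_2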




JackD Compatible Software Tools

QSynth - MIDI Synthesis
QTractor - MIDI Playing / Editing
Ardour - Multi-track Editing
Meterbridge - Audio Level Monitoring
Rosegarden - MIDI Editing

Other Tools

RipperX - CD Ripping
Audacity - Tempo Change (Change Tempo without Pitch Change, e.g. -50% doubles the track length)
Brasero - CD Burning







Friday, 11 September 2015

Qt WebEngineView Communication with JavaScript

Introduction

The Qt QWebEngineView is the new way of building applications that display, and interact with, web pages.

There is some documentation on the Qt website on how to implement this, with references to the older QWebView method, but it is lacking in examples.

I have produced the following with Qt 5.5:

Download the C++ Qt source for an example, which is a simple Qt application with a dialog containing a widget (called HtmlPage).  This widget is based on QWebEngineView, and extends it very slightly to:

  • Initialise the web page, and set up a communications channel to it.
  • Provide a function to ask the JavaScript web page to insert a dot at a given X/Y coordinate.
  • Provide a function to receive information about a cursor movement and emit a signal to the main window.

Similarly, the example web page has embedded JavaScript to:
  • Initialise and set up a communications channel to the Qt application.
  • Produce a dot at a given X/Y coordinate.
  • Emit a signal (function call) to the Qt application indicating the cursor has moved.
  • Include a signal handler to place a large dot at the mouse cursor position, and emit a signal when the mouse is clicked.

So the example has two-way communication with the JavaScript inside the web page.
  • When the mouse is clicked (JavaScript), the handler places a large dot at the cursor position.
  • The JavaScript then informs the Qt C++ application with a "widget.functioncall()".
  • Some time later (asynchronously), the C++ application receives the message in the "functioncall" slot.
  • The C++ class emits a signal to the main window to allow the X/Y coordinates to be updated on the screen.
  • The C++ class then makes a call back to the JavaScript to place a smaller dot at the same position.
  • Some time later (asynchronously), the JavaScript handler receives the message and places the dot.

Hopefully, the code will be somewhat self-explanatory, but here's a quick overview of the important bits:

C++

Add the "QT += webenginewidgets webchannel" to your project.pro file, and include the appropriate header files, and then in your C++ class / you need to set up the  communications channel.  Note that 'channel' should be declared in the class header.
// Set up the communications channel for this QWebEngineView parented class
this->page()->setWebChannel(&channel) ;
channel.registerObject("widget", this) ;
To call the JavaScript, build a JavaScript function call into a string, and then call the page()->runJavaScript() function.
QString command = QString("javascriptFunction(%1);").arg(functionParameter) ;
page()->runJavaScript(command) ;

To receive messages from the JavaScript, public slots must be used:
public slots:
    void updateComplete(int x) ;

JavaScript

To initialise, the following script line should be included:
<script type="text/javascript" src="qrc:///qtwebchannel/qwebchannel.js"></script>
Then the <body> tag should have an onload="initialise()" attribute.
In the initialise function, the other end of the communications link should be set up, noting that the assignment of (in this case) widget happens asynchronously, after the initialise function call has returned.  Note also that 'widget' should be a global variable.
var widget ;

function initialise() {
  if (typeof qt != 'undefined') new QWebChannel(qt.webChannelTransport, function(channel) {
    widget = channel.objects.widget;
  } );
}
Some time after the initialisation, the widget variable will be defined (it will be of type QObject).  You can emit signals to the C++ side simply by calling the appropriate function on it, so for example:
widget.updateComplete(x) ;
This will emit a signal which will be captured, some time later, in the updateComplete(int x) slot in the C++ class.  You need to appreciate that all calls between C++ and JavaScript are asynchronous with QWebEngineView.


Saturday, 13 June 2015

Building a Tour with Hugin and Pannellum - Part III - Publishing

Interactive Creation

Since writing this article in 2015, I've had a go at creating an interactive editor.  It's still early days, but it mostly works, so I thought it appropriate to share.


Or, if you want to create the tour manually, or understand what is going on under the bonnet, carry on...

Manual Creation

Quick Reference

  1. The software
  2. Convert the panorama to cubic
  3. Edit the floor
  4. Create the panorama
  5. Create the tour

The Software

I have found that Pannellum offers an excellent solution for presenting the panorama online: it is open source and free; it does not require Flash or Java plugins; it is small; and it can be modified as required.

There are, however, a couple of tweaks I have made in order to better perform the processing:

  • Modification to the generate.py script to allow a two-stage generation (panorama to cube, and cube to multiview) - this gives you a stage where you can edit the floor - and to automatically generate test html pages.
  • Modification to the pannellum.htm renderer to allow it to run on a local filesystem for testing, and to fix some bearings bugs.
  • It should be noted that Pannellum is in development, and at the time of writing, a modification is being made to allow all locations to be oriented north-south, so you don't have to mess around with re-orienting the viewer when they enter each scene.
I will write a separate blog entry about the modifications, once I can see which ones can be submitted back to Matthew Petroff, the original application developer.

Convert the panorama to cubic

To convert the panorama to a cubic view, you will need to run the command-line script generate.py.  All you need to do is pass it the source image and the target directory (and tell it to just produce the cubic view):
python generate.py PanoramaDirectory --r2c InputImage.tif
Each of the views is named facexxxxx.tif.


Edit the floor


This is where you can load the downward-looking face into your favourite image editor (I use Gimp) and fix the hole!

This is where you will also appreciate that you thought about where you put the tripod.

There are several techniques you can use:
  • Stretch the downward shot you did (without the tripod) and merge it in place.
  • Copy some other parts of the image, stretch and rotate as required to fill the gap.
  • Put a circular image of your own there.
  • Go back to the "Changing the Orientation" stage in Hugin (Part II) and in the previewer change the projection tab field of view to 360, and lens to fisheye, then on the move tab, drag the tripod hole to the centre of the image, export the image, edit in Gimp to remove the tripod hole, and then use the resulting circular image as the plug.


Create the panorama

Creating the panorama is easy: you just need to run the second part of the generate.py script:
python generate.py PanoramaDirectory --c2mv InputImage.tif

The panorama for the image will be created in the PanoramaDirectory.  At this stage, I copy the whole directory into the place where I am building my website, and I remove the facexxxx.tif and .pto files, as they are only used in creating the tour, not by the tour itself.
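For example, once the whole directory has been copied into the website, something along these lines tidies it up (adjust the directory name to match where you copied it):

# Remove the intermediate cube faces and the Hugin project file
rm PanoramaDirectory/face*.tif PanoramaDirectory/*.pto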

Each tour set has a sample config.json file, which you can use as part of your tour, e.g.:
{
    "type": "multires",
   
    "multiRes": {
        "path": "./%l/%s%y_%x",
        "fallbackPath": "./fallback/%s",
        "extension": "jpg",
        "tileResolution": 512,
        "maxLevel": 4,
        "cubeResolution": 3816
    }
}

Create the tour

This is probably one of the most time-consuming parts, because Pannellum loads the whole tour from a single JSON file.  If you follow these prescriptive rules, however, you should be OK.

I organise my tour folder structure as follows:
tour/
    pannellum.htm
    index.html
    config.json

    scene1/
    scene2/


pannellum.htm - This is the main Pannellum processing engine - you can download the latest copy from pannellum.org.  You can also download and build from source if you want to fix, modify or tweak it.

index.html - This is your webpage, which is used to pull everything together.  I use cascading stylesheets, divs and classes to force the presentation (see here for the working example), but a simplified version is here:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">

<head>
    <title>Tour Example - blog.trumpton.org.uk</title>
    <meta  charset="utf-8" />
</head>

<body>   
    <div id="widecontent">
        <iframe title="Tour" class="tour"
            webkitAllowFullScreen mozallowfullscreen allowFullScreen 
            src="pannellum.htm?tour=cambo.json">
        </iframe>
    </div>

</body>
</html>
config.json - This is the file where your entire tour is defined.  The file is split into two sections: the defaults and the tour.  The tour is split into scenes, each of which has two parts - the definition of the scene, and the hotspots that link to other scenes.  This is a cut-down example of my Gure Sekretua tour:

Start the json file:
{
The default section, including the first scene to use, and the pitch and yaw to use for the first view.  In this example, hotSpotDebug is set to true, which is useful when setting up hotspots:
   "default": {
        "author": "Steve Clarke",
        "firstScene": "terrasse1",
        "pitch": 3,
        "yaw": 180,
        "sceneFadeDuration": 2000,
        "compass": true,
        "autoLoad": true,
        "autoRotate": true,
        "autoRotateInactivityDelay": 10000,
        "hotSpotDebug": true
    },

The start of the scenes:
    "scenes": {
The first scene (terrasse1), with the adjustment (northOffset) needed to make the scene align due north (this should not be needed if you aligned your photos when you took them), its title and scene information.

Note that the multiRes section can be copied from the config.json example in the appropriate scene directory, however, you need to include the basePath, and double-check the dots and slashes in the path and fallbackPath entries.

        "terrasse1": {
            "northOffset": -2,
            "title": "Gure Sekretua Terrasse",
            "preview": "./tour-cover.jpg",
            "type": "multires",
            "multiRes": {
                "basePath": "./Terrasse1/",
                "path": "./%l/%s%y_%x",
                "fallbackPath": "/fallback/%s",
                "extension": "jpg",
                "tileResolution": 512,
                "maxLevel": 3,
                "cubeResolution": 2048
            },
The definition of the hotSpots (links to other scenes).  You can also have information points, and links to stills and videos, but those are not discussed in any detail here.

Each hotspot has a pitch and yaw (the location of the hotspot in the current scene).  To work out what they should be, load the scene in a web browser, enable the debugger (e.g. Control-Shift-I), click on a point and read off the numbers.

The sceneId, targetPitch and targetYaw define where the hotspot links to.  If the targetYaw is "same", the orientation of the view is maintained when you enter the scene (no need to calculate), but this does necessitate that all scenes are correctly oriented north/south.

At the time of writing, if you enter a numerical targetYaw here, it does not take into consideration the northOffset of the target scene, so you have to do some subtractions for any yaw value you use.  I suspect this will be fixed in future releases.
            "hotSpots": [{
                "pitch": -1,
                "yaw": 86,
                "type": "scene",
                "text": "Chemin",
                "sceneId": "chemin",
                "targetPitch": -5,
                "targetYaw": "same"
            }, {
                "yaw": 167,
                "pitch": 2,
                "type": "scene",
                "text": "Appartement",
                "sceneId": "sejour1",
                "targetYaw": 178,
                "targetPitch": -5
            }, {
                "pitch": -5,
                "yaw": 48,
                "type": "info",
                "text": "La Gare et Bas Cambo"
            }]
        },
Other Scenes follow (note a comma does not follow the final scene):


        "terrasse2": {
            ....

        },

"sejour1": {
    ....

}
Finally, the file is closed:
    }
}
When you've generated the file, you can test it.  My recommendation is to have the debugger open in the web browser, and to make use of a JSON syntax validator if you need one (see Debugging, below).
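For a quick syntax check, Python's built-in JSON validator will report the line and column of the first error, for example:

# Prints the parsed file, or an error with the offending line number
python -m json.tool config.json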

Hints

Any of the parameters in the "default" section of the JSON file can be overridden in the HTML file which calls up the tour.  This means that, for example, the same tour files can be used, but different 'starting places' can be selected:
<iframe title="Frame Title"
  class="tour"
  webkitAllowFullScreen
  mozallowfullscreen
  allowFullScreen  
  src="pannellum.htm?tour=config.json&firstScene=sejour1"> 
</iframe>

Debugging

Running a Webserver

Now, it should not be possible to run the website from the local filesystem due to browser security restrictions (although I have found that Firefox is an exception, and allows you to access data through 'file:' references).  Pannellum has also been hard-coded to deny access to local files from the pannellum.htm script, so you will need to run a webserver, or upload your files to a webserver, to try it out.

Running a webserver is easy if you have python installed (which you probably will have if you have used generate.py earlier!).

Create a script to launch a webserver, then connect to it with "http://localhost:8000/" - I use Google Chrome.

#!/bin/sh
cd <path>/<to>/<your>/<local>/<website>/<image>
python -m SimpleHTTPServer
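If your machine has Python 3 rather than Python 2, the module was renamed, so the equivalent script is:

#!/bin/sh
cd <path>/<to>/<your>/<local>/<website>/<image>
python3 -m http.server 8000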

Running the Debugger

Turning on the debugger (Control-Shift-I in Google Chrome) is a useful way of getting things sorted.  Some example messages, errors and fixes are shown below:

hotSpotDebug

This shows the Pitch and Yaw of the mouse-click, and of the centre of the image.  It needs "hotSpotDebug": true in the config.json file, or &hotSpotDebug=true in the URL used to load the tour - in the debugger, you will get messages such as this every time you click the mouse:
Pitch: 3.6160727325047777, Yaw: 184.96700450304274, Center Pitch: 3, Center Yaw: 169.18935205326287, HFOV: 100

Page Blank / Error Messages

If your page isn't showing, it could be due to bugs in your configuration.  To check that the configuration is correctly formatted, you can use a syntax-highlighting editor (I use gedit on Ubuntu), or paste your code into an online analyser / checker.

Example messages in the debugger / display:
Rogue '{' in the json file:
Uncaught SyntaxError: Unexpected token (r.config.M.onload @ pannellum.htm?tour=config.json&firstScene=scenename

Missing json file:

GET http://localhost:8000/config.json 404 (File not found)parseURLParameters @ pannellum.htm?tour=config.json

Incorrect firstScene reference name, either in the json file, or in the URL:
No panorama image was specified


My finished product can be found here.

Building a Tour with Hugin and Pannellum - Part II - Processing

Quick Reference

  1. Convert files from Raw to Tiff
  2. Add Files, Select Lens Type, e.g. Samyang Fisheye Multiplier 1.532, Angle of View 167
  3. Shift-F2 and Set the masks to remove the tripod / bracket etc
  4. Shift-F1 and Find control points (Hugin CP Find)
  5. Geometric Optimise Positions and Geometric Everything
  6. F3 and remove rogue control points
  7. Shift-F3 and Manually add control points if necessary
  8. Shift-F1 and Photometric Optimisation
  9. Control-Shift-P and change the orientation with Move/Drag northwards and level horizon
  10. Assistant, Create Panorama 

Converting Files

The best detail and most information is available in the camera's raw file format (for Canon, these are the .CR2 files).  JPEG uses only 8-bit depth, and discards information.  Hugin cannot work with CR2 files, but it does work with the next best thing (TIFF), so if you have managed to take CR2 images, convert them to 16-bit TIFF with minimal processing:
ufraw-batch --lensfun=auto --out-depth=16 --out-type=tiff *.CR2

Selecting and Loading Files

Launch Hugin Panorama Creator, and select expert mode (interface / expert).
Click the Add Files... button to load your images, and select the images for one panorama (remember to exclude the photo of your hand!).

If your lens does not have an electronic connection, and the lens characteristics are not stored as EXIF data within the loaded photos, you will be prompted for the lens type - the Samyang 8mm fisheye requires the following settings:

  • Lens type: Full Frame Fisheye
  • Focal Length Multiplier: 1.532 or 1.58 (TBC)
  • Angle of View: 167
For some reason, the load process asks the same question twice, but the answer is the same in both cases.

Stacking

If you have used bracketing, you will have a number of copies of each photo, each with a different exposure.  You will need to manually group the shots of the same view together.  Simply alt-click to select all photos taken of the same view, then right-click and assign them to a stack (0 for the first).  Repeat for the other images.  So, if you have taken 3 pictures in the bracket (dark, normal and light), you will have 3 images in each stack.

Another trick here is to assign the upward shot a different lens (1 instead of 0) - the lens is actually the same, but this assignment will make it easier when assigning masks.

Setting a Mask

Now the files are loaded, you need to mask out any part of the image that is not to be used for the alignment, and is not to be included in the final photos.

Select 'Add New Mask' and draw a box around any part of the panoramic head that you can see.  Double click to complete the mask, and make sure it is set to "Exclude region from all images of this lens" - that way, you only need to add a single mask.  If you assigned the upwards shot to a different lens, the upward shot will not have a mask, or can have a different mask, which is exactly what you need.


If you are using a normal lens you may want to add a separate mask for the images as you are likely to 'see' different parts of the panoramic head as you step through the photos.

You can also take the opportunity to mask out any anomalies, e.g. a person who appears in two photos - the only caveats being that there must be sufficient unmasked overlap to enable the processing, and that every part of the panorama must be available in at least one photograph.

Finding and Optimising Control Points

Switch back to the first tab, select 'Find Control Points' using 'Hugin's CPFind', and wait for the processing to finish.  This will search all of the images for the same features in overlapping regions.

Now, select 'Geometric: Optimise Positions (incremental, starting from anchor)' - this will fine-tune the image positions using the control points.

All control points have a distance figure which shows how accurate the software believes the point is - at this stage, you should remove all of the wildly inaccurate ones: press F3 to view the control points table, press the 'Select by Distance' button, enter a figure (say 60) and select delete - this will remove the rogue control points beyond that distance.

If you have many points, you may be able to use a lower figure - the lower the better - but you mustn't remove too many control points, otherwise there will not be enough to do the stitching, and you will have to manually add control points - make sure you save before you start adding!

Now change the Geometric drop-down to 'Everything without translation', which will further improve the tuning - you may also find that 'Position and view (y,p,r,v)' is effective.  Try different optimisations, and if the post-optimisation dialog says that everything went well, accept it; otherwise, try a different optimisation.

It may be that you will need to manually add some control points.

Manually adding control points is a difficult and time-consuming task, which involves carefully placing a cursor at the same point in two photographs.  When you have done this for as many points as possible, spread as far apart from each other as possible, don't forget to re-optimise.

Keep going back to the control point list (F3) and remove the really bad points.  Ideally at the end of the optimisation, the worst distances are < 2.

Finally, for tuning, select 'Photometric, Low Dynamic Range'.

Changing the Orientation

With Control-Shift-P, open up the fast preview, and select Move/Drag - here you should drag the image so that North is in the centre of the screen, and the black masked out area where the tripod existed is stretched out along the bottom of the screen.

It is important to get the image horizontal, and you can use the 'Roll' and Apply buttons to rotate as necessary - when you are finished, the horizon should look flat, and the masked-out area for the tripod should be equally tall at the left and right sides of the image.

In the following image, you will see the result - note the inset image of the chair leg, where the tripod was placed too near to an object - in this case it was a near miss and the leg is fully defined, but only by a few pixels.  The more clearance you can have, the easier it is in the photoshopping stage later.

Saving the Output

Once the image looks correct, you can save the output to a TIFF file - I select 12000 x 6000, which provides an excellent level of detail in the post-processed images.  Don't worry about the loading speed: the multi-res images still have a tiny initial image, and then do a staged load depending on where you are looking and at what zoom level.

I find it easiest to use the Assistant/Create Panorama button for this.  Depending on the images you have, you need to select the best merging options - I've found that High Dynamic Range has been useful when bright skies are present.

And that's it for Hugin - now you have an equirectangular image for use in the next stage.

Building a Tour with Hugin and Pannellum - Part I - Taking the Photos

Background

I've captured a set of pictures to make a 360-degree tour 'a la Streetview' using open source programs.  The resulting tour does not need any Flash apps, works on PCs, Android and iOS, and is served from a plain web server, making it fast and easy to configure.

Quick Reference

  1. Mount camera on panoramic head
  2. Level tripod
  3. Align head left/right
  4. Set up by doorframe, pan left/right and adjust forward/backwards for no parallax
  5. Tuck in tripod legs
  6. Place tripod away from near objects, check for mirrors and shadows
  7. Set hyperfocal length
  8. Set camera mode
  9. Take overlapping photos in clockwise circle (2 second delay)
  10. Take photo directly upwards

The Kit

It's not practical to take 360 degree photos without the use of a tripod and a mounting bracket because you get severe parallax problems, and it is almost impossible to stitch the photos back together.  I use a sturdy tripod and a panoramic head - this is a bracket which allows the camera to rotate in all dimensions about the lens, rather than the mounting point on the body, and can be bought for as little as £25.

As far as the lens of the camera goes, I started with the standard lens which came with my Canon EOS1000D, but found I needed to take over 100 photos to get a full panorama, and in some cases (particularly indoors) it was not possible to stitch the photos back together as some pictures ended up being of bare bits of wall with no distinguishing features (more on that later).  So, if you want to start with the lens that came with your camera (e.g. an 18mm-50mm zoom), start outside with at least 5m between you and the nearest object to have a chance.

I tried using a fisheye lens adaptor for the existing lens (i.e. a second set of lenses that plugs onto the front of your existing lens), but had a number of problems.  Firstly, the assembly was marginally too long for the panoramic head.  Then, once I had managed to take the photos, the lens told the software that the photos were taken with an 18mm lens but couldn't tell it about the adaptor; and as I had no model number for the adaptor, I couldn't tell the software the right parameters either (there is a calibration tool you can try).  Most frustratingly, if I knocked the zoom whilst taking photos, I destroyed the set.

I ended up buying a Samyang 8mm fisheye lens for the camera, which in theory reduces the number of pictures you need to take from 100 to just 5 - this has a processing advantage on the PC when stitching, as a panorama can be made in 30 minutes rather than 48 hours!  This particular lens is not expensive as far as lenses go, and the same lens is badged by different companies and supplied at wildly different prices, so check out "Samyang 8mm f/3.5 Fisheye" and "Rokinon FE8M-C 8mm F3.5 Fisheye".

The lens comes with a large sun shield, however I had to unclip this because it fouled the panoramic head bracket.


Setting Up the Panoramic Head


It takes quite a while to set up the camera and lens on the tripod with the panoramic head, and this is a very important step as it is essential to remove any parallax.

To do this, you need to set the camera to rotate about the lens.  Firstly, adjust the tripod pan / tilt so that the mounting plate is completely flat as the head pans - this is absolutely essential if you are going to use the tripod head to pan (the angle doesn't matter at all if you are going to use the panoramic head to do the panning).

Fit the panoramic head to the tripod.  For the left-right direction, setting up is quite easy: with the tripod set up nice and level, look at the camera lens and adjust it so that it is directly over the centre of the tripod - use a plumbline if necessary.

Once the lens is directly above the centre of the tripod, you can adjust the depth - i.e. move the lens back and forth until the parallax is removed.  Set the tripod up near a vertical object, e.g. so the camera lens is inches away from a door frame, in a position where you can see objects in the distance.  Tilt the camera (using the panoramic head) so that a distant object can be seen overlapping the door frame - it is important that this is on the horizontal centre line of the camera.

Now, pivot the camera from left to right, and adjust the front-back position of the camera until the distant object does not move with respect to the door frame at the left and right extremes.

At this point, your camera is set up in a no-parallax position.  If you were lucky enough to be able to use the fixed screw holes in the bracket, you can read off the calibration numbers on the panoramic head for next time.  Note that you may be able to see the end of the bracket through the lens - don't worry about this, it can be masked out later.

The following picture gives an example of a correct parallax setup, with a door frame in the foreground, and an orange ball in the distance - you can see that the ball does not move with respect to the door frame when the camera pans left to right.



Setting up the Camera

Focus and Aperture

To get the best results, you need to use a fixed focus - this means that the same point in adjacent images is at the same focus, and the software has an easier ride when stitching them back together.  If you are using the same fisheye lens as I was, it's not a problem as the lens is manually focussed, but if you are using the stock lens, or a more expensive one, you need to turn off the auto-focus.

The hyperfocal distance is the optimum distance at which to focus which, combined with an aperture setting (F-stop), gives acceptable focus over the greatest range of distances.  For example, with the stock lens, if you want everything from 1m to infinity to be in acceptable focus, you set the F-stop to F16 and focus on something 1.5m away - there are apps for this, just search for "Hyperfocal".
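Those apps simply apply the standard hyperfocal formula, so as a rough sketch you can also work it out yourself - note that the 0.019mm circle of confusion used below is an assumed value for an APS-C sensor, so adjust it for your own camera:

# H = f*f/(N*c) + f (millimetres): focus at H and everything from roughly
# H/2 to infinity is acceptably sharp.  Example: the 8mm lens at F5.6.
awk 'BEGIN { f=8; N=5.6; c=0.019; h=f*f/(N*c)+f; printf "H = %.0f mm (near limit ~ %.0f mm)\n", h, h/2 }'

That works out at roughly 0.61m, which is in the same ballpark as the F5.6 figures listed below.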

Samyang 8mm fisheye settings:
  • F5.6, 0.65m focus (Everything from 0.32 to infinity will be in focus)
  • F8, 0.5m focus (Everything from 0.23 to infinity will be in focus)
  • F16, 0.3m focus (Everything from 0.13 to infinity will be in focus)

You can pretty much use F8 for everything, and around F8 is where you will probably get the sharpest images.
  • For Dark scenarios, use F5.6
  • For Bright scenarios (e.g. outdoors), use F16
Camera Mode and White Balance

Take your pictures in high-quality JPEG + RAW.

If possible, use exposure bracketing - set the exposure as low as possible and take brackets 2EV apart in RAW format (when taking shots outdoors on a sunny day, use a high F-stop).

You can use Aperture Priority Mode, and simply set the F-Stop setting, or use the Manual Mode, but in the latter case, you will need to take into consideration the potentially wide range of brightness / darkness, particularly if shooting outdoors.

This bit is TBC, and needs some pictures and a better explanation.

Taking Your Pictures

Pick your Day

Choose the right day and time for your photograph, considering shadows, the sun, light levels, dynamic range (light to dark) in the image and your camera's capabilities.

Placing the Tripod

Care should be taken when placing the tripod:

  • Ideally, find a patch of nondescript floor or ground, at least 1m away from anything - this will help with photoshopping later!
  • Tuck the tripod legs in as far as you can whilst keeping it stable - again, this makes editing easier later.
  • Check for mirrors and reflections in windows of the tripod itself.
  • Place the tripod in such a position that its shadow is obscured, or can be photoshopped out later.

Taking Photographs

I find it best to put the camera on a 2 second delay, that way the camera is stable when the photo is taken, plus you have time to get your shadow out of the shot.

Adjust the camera downwards so that it can just 'see' the tripod and the panoramic head.  Now, take photos, moving round in a clockwise direction, making sure that some features in the previous photo are present in the next (at least a 20% overlap is ideal) - with the Samyang 8mm lens, this means you should only need to take 4 photos, however I usually space it out with 6 photos outdoors, or more if I am close to a wall or indoors.

For each photo, make sure the camera is stable, and watch for moving objects - e.g. people walking, or close-up trees waving in the wind.  If there are any, get them into the centre of an image, not in any overlap region, and make sure they only appear in one photo!

Once you've panned round, move the camera up - for the Samyang 8mm lens, this literally means straight up!  If you've got a standard lens, you will need to take photos in several bands until you hit the top!

If the day is particularly bright, and your camera doesn't have a great dynamic range, you can consider taking images in RAW format, and taking a second set of images with a smaller aperture or a faster shutter speed.

Note that you will need to be quick if there is a lot of movement - e.g. clouds moving, and this is where using a standard lens is very difficult.

Once you've taken your photo set, take the camera off the tripod, and take a photo straight down - this may be useful in 'filling the hole' later!

Mark the End of a Panorama

Once I've taken a complete set of pictures, I take a photograph of my hand to delimit one set from the next - this is particularly useful if you adjust the settings or position very slightly between sets.