Friday, April 12, 2013

i7 3930K (LGA2011) build: Asus P9X79

I spent a good few weeks researching and reading up on my latest monster video editing box.  Though it was expensive, I decided to go for the i7 3930K chip, a pricey $539 on sale from NewEgg.  Listed below are the parts that I assembled for the box.  The mainboard and the chipset were my primary concerns, but the other parts were chosen because they conformed to my build specs and they were on sale or discounted in one way or another during one of NewEgg's Build sales.

The cost of the individual items wasn't bad, but it certainly added up.  After NewEgg's discounts, the order came to:

1 x DISCOUNT FOR AUTOADD #76165: -$75.00
1 x DISCOUNT FOR PROMOTION CODE: -$85.80

Subtotal: $1707.10
Tax: $120.06
Shipping and Handling: $7.99
Total Amount: $1835.15


Here's a pic of them fresh out of the shipping box:
Because I don't build too often, it took about a day to assemble the main board and the components in the above picture.  It took another half day to port my four-drive RAID 1+0 set from my old video editing workstation to this one.  Of course, I had to create a new RAID set on the new box.  I will no longer be using the trusty 3Ware 9650SE RAID card, as the new mobo has both Intel and Marvell RAID chipsets on board.  The Marvell only supports two drives, so I used the Intel to connect my four 1.5TB Western Digital Green drives.

As all my content (photos/docs/videos/music/virtual machines/etc) lives on this redundant RAID set, I made sure to back up the ext4 filesystem that sits on top of it with fsarchiver.  I also bought a 3TB drive as a utility drive to test my fsarchiver restores.  This is a good system, as it confirms my backup is valid:
1) unmount the content filesystem (the filesystem that sits on top of the RAID)
2) back up the content using fsarchiver to a system drive with enough space to hold the 1.4TB of content
3) as my livelihood depends on my content, test the fsarchiver restore process by restoring the archive to a third drive, a 3TB Seagate partitioned with GPT and formatted ext4
4) once the restore is validated as good, copy the files from the restore over to my new RAID 1+0 set
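As a sketch, the four steps above look something like this on the command line.  The device name and backup paths here are placeholders, not my exact setup:

```shell
# 1) unmount the content filesystem so nothing changes during the archive
sudo umount /mnt

# 2) archive it with fsarchiver to a drive with enough free space
sudo fsarchiver savefs /backup/content.fsa /dev/mapper/vg_ogre-lv_root

# 3) restore the archive to the scratch 3TB drive to prove the backup is valid
sudo fsarchiver restfs /backup/content.fsa id=0,dest=/dev/sdX1

# 4) mount the restore, spot-check it, then copy the files to the new RAID set
sudo mount /dev/sdX1 /mnt/verify
```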

I love the Corsair 400R case.  With the nice cable tunnels, I was able to keep the mainboard area pretty free of cables, though it doesn't look like it from the below pic.  You can see the pipes to the Corsair H100i in the center of the photo.
I ran most of the cables through the tunnels.  You can see how many cables were hidden using these tunnels:
I'm not a great cabler and I just wanted most of the cables out of the way.  Given that there are five 3.5" drives, one 2.5" SSD and a DVD player in the box, I think I did alright.

The real test was firing the box up for the first time.  I was rather shocked when it did come up, mainly because I had worried that the memory wouldn't be compatible.  It was.  It helped that I had invested a great deal of time before the build in reading the 176-page manual and watching the build videos listed at the bottom of this post.  So the base build, with a single SSD, powered up successfully.  Hooray!

The next hurdle was installing a basic operating system, Fedora 18 64-bit, on the machine's single SSD.  Again, surprisingly, this worked without a hitch.  I had spent a good deal of time reading the Fedora Project's list of UEFI bugs and suspected I'd hit problems, so I was very happy that I didn't encounter any.

The final hurdle was migrating the four drives from my old workstation to a RAID 1+0 configuration on the new box using the board's Intel RAID chipset.  Once I moved the drives over, I configured the BIOS in the Asus P9X79 to run RAID, created the RAID set and rebooted the box.  On bootup, I saw three main screens:
1) The Marvell RAID BIOS bootup screen:
2) The Intel RAID BIOS bootup screen:
3) The American Megatrends BIOS screen:
The American Megatrends BIOS and ASUS's UEFI BIOS screens are completely configurable and nicely laid out.  I won't be using many of the options, but it is a pleasure to have a system that is so well stocked yet boots up quickly.  I'd say it takes about 30 seconds to get from cold start to my initial Fedora 18 grub2 prompt.

I haven't overclocked the mobo yet, but given what I've been reading and the good chip cooling via the H100i, I should be able to push the i7 from 3.2GHz to at least 4.6GHz.

Wattage Used for Typical Tasks
Here is a chart of the tasks running in Fedora 18 x86-64 and the wattage each used:
Gnome System Monitor graphic
On the top row, the CPU History chart, you can see that my H.264 video encode took about 40% of the CPU, with all twelve logical CPUs in use.  On the bottom row, the Network History, you can see my upload to Vimeo: the 530MB file took about 150 seconds at a peak upload rate of 40Mbps.  That 40Mbps upload is courtesy of FiOS.
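As a sanity check on those numbers, 530MB sent in about 150 seconds works out to an average of roughly 29Mbps, comfortably under the 40Mbps peak:

```shell
# average upload rate in Mbps: (530 MB * 8 bits/byte) / 150 s / 1,000,000
# (using 1 MB = 1024*1024 bytes)
echo $(( 530 * 8 * 1024 * 1024 / 150 / 1000000 ))  # prints 29
```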
Next up: installing Windows 7 Professional so that I can do some baseline performance measurements.

ciao!
TAG

UPDATE 2013/11/23
I've been so busy at work for the last six months that I haven't had time to use the new box.  Turns out that I made a mistake in copying my files over to the new drive: I hadn't preserved the time/date stamps of my files, normally accomplished with a "cp -Rp .."!  This meant I could no longer tell when I had edited a script, taken a picture, etc.  Bollocks!  So I had to grab the timestamps from the original drive and apply them via "touch".  The procedure looked something like this:

sudo find . -printf "%t %h/%f\n" > temp.txt
awk '{print "touch -d \""$1,$2,$3,$4,$5"\"",$6}' temp.txt > fixDate.sh

temp.txt looked like this:
Thu Nov 22 12:37:26.0000000000 2012 ./sodo
Sat Jan 22 10:29:32.0000000000 2011 ./sodo/.netbeans
Sat Jan 22 10:29:32.0000000000 2011 ./sodo/.netbeans/.superId
Sat Jan 22 11:58:29.0000000000 2011 ./sodo/.netbeans/6.7

fixDate.sh looked like this:
#!/bin/bash -xv
/usr/bin/touch -d "Thu Nov 22 12:37:26.0000000000 2012" ./sodo
/usr/bin/touch -d "Sat Jan 22 10:29:32.0000000000 2011" ./sodo/.netbeans
/usr/bin/touch -d "Sat Jan 22 10:29:32.0000000000 2011" ./sodo/.netbeans/.superId
/usr/bin/touch -d "Sat Jan 22 11:58:29.0000000000 2011" ./sodo/.netbeans/6.7

Then I plopped a #!/bin/bash at the top of the shell script, ran chmod a+x fixDate.sh, and off I went.  The script worked, thankfully!  Unfortunately, I wasted about two hours dealing with this today.  Super drag!
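One caveat with the awk approach: it only prints field $6, so any filename containing a space gets truncated.  A hedged alternative (paths here are hypothetical) is to skip date parsing entirely and copy each file's timestamp straight from the original tree with "touch -r":

```shell
# Copy mtimes from an original tree onto a same-shaped copy.
# Handles spaces in filenames via find -print0 / read -d ''.
fix_times() {   # usage: fix_times /path/to/original /path/to/copy
    (cd "$1" && find . -print0) | while IFS= read -r -d '' f; do
        if [ -e "$2/$f" ]; then
            touch -r "$1/$f" "$2/$f"   # -r: use the original file's timestamp
        fi
    done
}
```

For example, something like `fix_times /mnt/olddrive /mnt/raid` (both mount points being assumptions about your layout).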

A second problem: when I use "mv" to move files to my new RAID set, the mv command gives me:
mv: setting attribute ‘security.selinux’ for ‘security.selinux’: Operation not permitted

Haven't figured out why tho.  Argh.
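A guess, not a verified fix: when mv moves a file across filesystems it copies extended attributes verbatim, including the SELinux label, and the destination may refuse to accept it.  Copying without preserving the context and then letting the target relabel itself might sidestep the error (paths here are hypothetical):

```shell
# copy without carrying over the SELinux label...
cp -r --no-preserve=context sourceDir /mnt/raid/
# ...then apply the target filesystem's default labeling
restorecon -R /mnt/raid/sourceDir
```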
*** end update ***


References
ASUS P9X79 Home Page: https://www.asus.com/Motherboards/P9X79_DELUXE/
Manuals/Qualified Vendor Lists/Firmware Updates: http://usa.asus.com/Motherboards/Intel_Socket_2011/P9X79_DELUXE/#download

JJ of ASUS Republic of Gamers (asusrog) was immeasurably helpful in the builds.  In fact, I probably wouldn't have bought the board if I hadn't seen these videos.  Here are the core videos that helped me:
X79 Overview, Part I: http://www.youtube.com/watch?v=qGgQG3ANtaA
X79 Overview, Part II: http://www.youtube.com/watch?v=iE-YJafM65k
P9X79 Deluxe Hands-On Review: http://www.youtube.com/watch?v=xBPgv8aiL0Y
Build, Part I: http://www.youtube.com/watch?v=j9aY1HGYIyM
Build, Part II: http://www.youtube.com/watch?v=W8jogtOzw6Y

Overclocking on Sandy Bridge
Auto Overclocking: http://www.youtube.com/watch?v=Ct6tQaEsWYY
Part I: http://www.youtube.com/watch?v=Kx2z07sFM2I
Part II: http://www.youtube.com/watch?v=seVPIR06ZY4
Advanced Overclocking: http://www.youtube.com/watch?v=CZfOEs5n0jo

Non-JJ vid on overclocking: http://www.youtube.com/watch?v=8qz8olA8bRM

RAM Cache/SSD Cache
How to Setup a RAM Cache on a P7X79: http://www.youtube.com/watch?v=qGrqMllHVVY
RamCache/Ramdisk: http://www.youtube.com/watch?v=5cqfhZvyE80
SSD Caching: http://www.youtube.com/watch?v=uhCGo0LWaTY
SSD Caching (short version): http://www.youtube.com/watch?v=hvfiRA6hDhE


Cable Management
http://www.youtube.com/watch?v=5xKIiAu4rio

Sunday, March 17, 2013

backing up my systems: it ain't my day (or month)

OK.  I've been in three weeks of hardware hell, mainly from trying to get all my machines (a MacBook Pro, my main Linux video editing workstation and an older Windows Vista digital audio workstation (DAW)) properly backed up.  I detailed my strategy for this in my last post.  This post is more of a rant than anything else, so please excuse the lack of any real mentorship on problem solving, except maybe "Google is Your Friend."

Issue #1: Drobo runs out of space
The Drobo has been a fine unit for me.  But as time goes on, you acquire more media and your available space runs out.  You'd think it would be a simple matter of buying a new disk, putting it in the Drobo and letting the BeyondRAID rebuild its array.  Well, the first drive I bought, a Western Digital Green 1TB, died after the first rebuild.  A drive had never failed out-of-the-box on me before, so I didn't truly believe it was dead.

With my non-belief firmly in place, I tried to use the drive in different capacities.  So as a test, I formatted the disk using my Thermaltake BlacX connected to my Mac.  I was able to copy files over to it (though I didn't copy gigs and gigs worth as a true test).  But when I put the unit back in the Drobo, the Drobo gave an immediate "red" light for that drive bay, indicating the drive was bad.  I switched drives in the Drobo unit around, because I thought it could have been a faulty drive bay.  

Then, I had the bright idea to move the data off the 2TB system drive of my main Linux machine onto the new Western Digital, put the 2TB in the Drobo and use the new 1TB (which I still thought was a good, error-free drive) as my Linux system drive.  Since the system drive was a logical volume, this required some fancy footwork: it took a week of work to figure out how to shrink a logical volume so that the used space of the 2TB drive (which was less than a terabyte) would fit onto the 1TB.

I learned a lot from that experience, to be detailed in a later post.  Suffice it to say that in the end, the 1TB was truly dead and I ended up getting a new 1TB (a Western Digital Black) from BestBuy and that solved my Drobo storage issue.  Kudos to BestBuy, as they were able to give me the Black at the same price as the Green for my trouble.

Issue #2: Mac Time Machine "the identity of this backup disk has changed" (Sparsebundle Problem)
This was an odd one.  After installing the new disk in the Drobo, Time Machine started showing the error "the identity of this backup disk has changed".  From the below post:

I executed the "chflags" command listed.  This ran for about four hours.  Afterward, I tried to execute the "hdiutil" command listed, but the Mac said it had already been run.  So to test the result of the chflags command, I shut down and restarted the Drobo.  When Time Machine started backing up, it no longer gave me the error.  Hooray.  Another one down.

Issue #3: Windows Vista DAW crashes
So after a week spent on #1 and #2, I was ready to start work on a new musical project with some friends.  Firing up my old Dell 400SC running Windows Vista (OK, OK..I know I need to upgrade to Win7, but I've got a recording session coming up soon and didn't want to change OSes yet), I was presented with this error:
C:\windows\system32\config\system is corrupt

Oh, wonderful.  So I popped in the Vista Ultimate DVD and selected "Repair".  After it ran, the system rebooted and I was pleasantly surprised to find that this fixed the problem and that I was able to get back into the system.

Getting back into the system, I reasoned that if the drive was going bad, I'd better make a backup.  So I ponied up $40 for Drobo's PC Backup product, the ugly step-brother of the seamless Drobo integration with Mac Time Machine.  Assuming the PC product worked the same way the Mac product did, I selected the defaults.  Well, the defaults do NOT back up the entire drive, only your user data.  My bad for not reading the fine print, but I believe that a Drobo product should be consistent between systems and the default should be to back up your entire drive with all system data included, as long as you have the space on your Drobo.  But that's just me.

The missing data would be crucial for what happened next.

Issue #4: Windows Vista DAW crashes again
After taking a two day hiatus from my backup shenanigans, I fired up the DAW again.  And guess what..a new error appears:
\Windows\system32\winload.exe is missing or corrupt (status 0xc000000f)

Oh great.  Going back to my ritual, I loaded the Vista Ultimate DVD and selected "Repair".  However, after the reboot, no go..still the same "missing or corrupt" error.  I tried the repair a number of times, as the Vista repair process would show slightly different screens every time it booted and recognized the system.  This gave me false hope that the DVD was actually repairing something correctly.  The other frustrating part of this process was that, for whatever reason, the DVD would take 10 minutes to load on my Dell.  I'm not sure what the problem was there.  So I chewed up a few hours doing this multiple times.

Finally, after reading some Google posts by people with the same issue, I decided to run "chkdsk /r" from the command line, rather than relying on the non-informative Windows Vista screen to run some unknown fix command.  I had to specifically boot into the System Recovery Options screen as shown in the below post:

Once I was there, I selected "Command Prompt" and typed in good ol' "chkdsk /r", the "repair" option to chkdsk.  This time, I was rewarded with an actual status screen that told me "bad clusters found", Windows was marking the clusters as bad and was moving the files located on those clusters to good sectors on the disk.  (Sectors and cluster primer here: http://t.co/DLFjrXAp5C).  This process took about three hours, unlike the half-hearted effort that Windows Vista attempted.  I wonder why Vista did not default to doing a real "chkdsk /r".  That doesn't help anyone who has a failing disk.  Bad default!

After the bad cluster identification and repair, I was really glad to see Vista boot up properly!  But since there were so many bad clusters, I had to make a full backup or clone of that drive but quick!  For this, I popped in an unused 500GB SATA drive I had lying around.  I repartitioned and formatted this drive.  It had been a second Vista system disk at one point, so I knew the drive's main partition was marked as bootable.  So I was good to go there.  I then dragged all the files from my C: onto the new E: (my DVD being the D:).  However, on bootup, Vista showed an error:
"System volume on disk is corrupt"

I suspected this was a problem with the NTFS boot files on the 500GB drive as they had links from the partition map from the old 256GB drive that was failing.  Luckily, when I ran Vista repair, Vista was able to fix this issue and the system started properly.

Issue #5: Windows Vista continually keeps "preparing your desktop"
After the system came up, I made sure all my applications (Reaper, Drobo PC Backup, etc) were working properly.  Unfortunately, they were not, as Vista continually kept giving me the message "Preparing Your Desktop" when I logged into my profile.  I tried a number of suggestions from Google, but they did not work.  I didn't have any critical data in the old profile, so I figured I'd bite the bullet and create a new profile.  After doing this, the message disappeared and I was able to save my desktop settings and application preferences properly.

In Sum
Wow.  So this has been three weeks of hell.  I "think" I am back to steady state with my systems.  I was able to reset Drobo PC Backup to a full system backup of my Vista DAW to the Drobo.  The Drobo is backing up the Mac just fine and CrashPlan is encrypting my main Linux box backup to the Cloud.

Maybe now I can go outside and get some sun?
TAG

Saturday, March 09, 2013

protecting your data, locally and globally

I've spent a number of years trying various backup methods for my Linux box.  I think I finally have a pretty good one down.  The main idea is to set up my system in a way that makes backup and restore easier.  This setup involves two components:
- a data source: my documents, videos, audio files and pictures are stored locally on a redundant hardware RAID5 set
- the separation of system and data partitions onto different physical devices (hard drives)

Backup Strategy
My actual backup strategy comes in three parts:
- an archive solution: fsarchiver to backup the system and data partitions
- network backup: a network backup device such as a NAS or in my case, a Drobo
- global backup storage solution: unlimited CrashPlan account

This backup strategy has been working well, though not without hiccups along the way.  It protects my important data by providing redundancy at multiple levels.  At a high level, here is how this is implemented:
- disk redundancy via RAID5 set
- local redundancy via network addressable storage
- global redundancy via CrashPlan if my house is destroyed

More specifically:
- my Linux system drive, Fedora 17, is one physical SATA drive
- my data drive is a hardware RAID5 unit using a 3Ware 9650SE with four physical SATA drives
- when I install a new OS, I use symbolic links in my user's home directory to point to my data, explained below

The bottom line is that no single backup method should be your entire backup strategy.  If you only have two of these methods implemented, you're better off than most people.

System Setup
Symbolic links are the key to segmenting the system partitions from the data partitions.  This separation matters because it makes it much easier (from a Linux perspective) to upgrade or try different versions of Linux on your system drive, while your data drive stays essentially untouched and less prone to upgrade or experimentation tragedies.  Welcome to Linux!

Technically, here is how segmentation is implemented.  On my system drive, I mount my data partition.  In this example, I'm using /mnt as the mount point for my ext4 data partition:
[sodo@computer ~]$ cat /etc/fstab
/dev/mapper/vg_computer-lv_root /                    ext4    defaults        1 1
/dev/mapper/vg_computer-lv_home /home                ext4    defaults        1 2
/dev/mapper/vg_computer-lv_swap swap                 swap    defaults        0 0
/dev/mapper/vg_ogre-lv_root /mnt                     ext4    defaults        0 0

I have my content folders on the data partition that I will symbolically link from my system drive:
[sodo@computer mnt]$ ls -ltr /mnt
total 36
drwxr-xr-x. 16 sodo sodo 4096 Dec 21 19:04 MusicLibrary
drwxrwxr-x.  9 sodo sodo 4096 Jan 12 12:07 videos
drwxr-xr-x.  8 sodo sodo 4096 Feb  1 10:57 doc
drwxrwxr-x.  8 sodo sodo 4096 Mar  2 09:26 pictures

I then create symbolic links from my user's home directory to the equivalent directories that I've setup on my data partition:
[sodo@computer ~]$ ls -l | grep '^l'
lrwxrwxrwx.  1 sodo sodo    8 Oct 23  2011 Documents -> /mnt/doc
lrwxrwxrwx.  1 sodo sodo   18 May 30  2011 Music -> /mnt/MusicLibrary/
lrwxrwxrwx.  1 sodo sodo   14 Sep  2  2011 Pictures -> /mnt/pictures/
lrwxrwxrwx.  1 sodo sodo   12 May 28  2011 videos -> /mnt/videos/

With the symbolic links in place, I've made the link from my system to my data.
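For reference, each of the links above was created with a simple ln -s from the home directory:

```shell
ln -s /mnt/doc ~/Documents
ln -s /mnt/MusicLibrary ~/Music
ln -s /mnt/pictures ~/Pictures
ln -s /mnt/videos ~/videos
```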

The Archive-Backup Process
I am going to use the terms "archive" and "backup" synonymously.  The overview is that I'll show how I back up my data partition using fsarchiver and then I'll copy those backups to both my local network and global storage solutions.  I will only show the archive process for the data partition.  Feel free to extrapolate the information herein to do the same for your system partition.

The Core Archive Process
1) check the used space on the source partition to be archived, as well as the available space on the destination/target for the backup.  From the below example:
a. source filesystem (the data partition): vg_ogre-lv_root
b. destination partition (for backup storage): vg_computer-lv_home
[sodo@computer ~]$ df -H
Filesystem                       Size  Used Avail Use% Mounted on
/dev/sda1                        508M   80M  403M  17% /boot
devtmpfs                         5.3G     0  5.3G   0% /dev
/dev/mapper/vg_computer-lv_root   53G   15G   36G  30% /
/dev/mapper/vg_computer-lv_home  2.0T   .2T  1.8T  67% /home
/dev/mapper/vg_ogre-lv_root      4.5T  1.4T  2.9T  32% /mnt

2) verify available space on the destination filesystem
The source partition is using 1.4TB on vg_ogre-lv_root and I have 1.8TB available on the destination for the backup, lv_home.  So..good to go.
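This space check can be scripted; here's a hedged sketch using the same mount points as the df output above (note that df --output needs a reasonably recent coreutils):

```shell
# succeed only when the destination's available bytes exceed the source's used bytes
fits() { [ "$2" -gt "$1" ]; }   # fits USED_BYTES AVAIL_BYTES

src_used=$(df -B1 --output=used /mnt 2>/dev/null | tail -n1 | tr -d ' ')
dst_avail=$(df -B1 --output=avail /home 2>/dev/null | tail -n1 | tr -d ' ')
if fits "${src_used:-0}" "${dst_avail:-1}"; then
    echo "good to go"
fi
```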

3) if there is enough space on the target filesystem, prepare to run fsarchiver by unmounting the data partition.
To keep the filesystem from being updated during the archive process, fsarchiver requires that the source filesystem be unmounted before making the backup.  Like so:
[sodo@computer ~]$ sudo umount /mnt/

The nice thing about the split system-data partition setup is that it is unnecessary to load a Live CD in order to back up the data partition.  Normally, one has to boot with a LiveCD in order to back up the system partition.

4) Once the filesystem is unmounted, run fsarchiver.  As one of the destinations for the backup is the cloud, use the -c option to encrypt with a password:
[sodo@computer ~]$ sudo fsarchiver -j8 -c [password] -o savefs ~/f17backup/backup_lv_root.fsa /dev/mapper/vg_ogre-lv_root

Archiving the 1.4TB of content took five hours on my eight-core, 1.6GHz Dell SC1430 and produced a roughly 1.2TB archive file.
[sodo@computer ~]$ ll ~/backup/backup_lv_root.fsa 
-rw-r--r--. 1 root root 1186229328445 Mar  5 08:32 /home/sodo/backup/backup_lv_root.fsa
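As a quick sanity check that the archive is readable, fsarchiver's archinfo subcommand prints what is stored inside it:

```shell
# prints archive attributes: creation date, filesystem type, label, bytes used, etc.
sudo fsarchiver archinfo ~/f17backup/backup_lv_root.fsa
```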

Copying the file to network-based storage
I have a 2.5TB CIFS (Windows share) created on my Drobo.  On my Linux box:
1) I mount the Drobo filesystem:
[sodo@computer ~]$ sudo mount -t cifs //drobo/Linux /mnt/drobo -o credentials=/home/sodo/smb.credentials

2) Copy the archive to it:
[sodo@computer ~]$ sudo cp -rp ~/f17backup/ /mnt/drobo/

Copying the file over my home network took about 16 hours.
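If you want that share mounted automatically at boot, the equivalent /etc/fstab line would look something like this (I mount it by hand, so treat this as an untested sketch):

```
//drobo/Linux  /mnt/drobo  cifs  credentials=/home/sodo/smb.credentials  0 0
```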

A Word About the Drobo
The Drobo has been one of the easiest storage and backup solutions that I've ever used.  It integrates seamlessly with Time Machine for my MacBook, and I've created a Windows share on the device in order to copy my Linux archive.

Over the past few years, I've expanded the drives within it about three times.  I went from four 500GB drives, to four 1TB drives plus one 500GB, to my configuration as it is today: five 1TB drives.  The drive upgrades were easy, though time consuming.  I removed each of the older drives one at a time and replaced them with the larger drives.  Each time a drive was upgraded, the Drobo would automatically and non-destructively rebuild its storage protection.  The Drobo's storage protection is called BeyondRAID, Drobo's own custom algorithm on top of RAID.

The integration with the Mac is seamless; however, the Windows/CIFS file share can be a bit wonky, as the share has a tendency to become unavailable for whatever reason.  The resolution is to shut down and restart the Drobo.

Cloud-Based Storage
A last layer of protection above and beyond the local and network copies of my data is to copy the encrypted archive to a cloud-based solution.  The purpose is to protect my data in case of a natural disaster that destroys all my local storage media.  With the increasing number of natural and man-made disasters happening these days, I've recently invested in a data protection plan with www.crashplan.com.  I got an unlimited plan to store my 1.2TB of data in the cloud.  Most of that data is audio, video and image files.

After tweaking the CrashPlan app to pump more data through my local and wide area network (from 1280KB/s and 2560KB/s, respectively, to about 6400KB/s for both), it took about two weeks to upload this amount of data to CrashPlan's cloud!
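For perspective: even at a sustained 6400KB/s, 1.2TB (call it 1200GiB) is only a couple of days of raw transfer, so the two-week figure suggests the real-world throughput to the cloud ran well below my configured cap:

```shell
# hours to move 1200 GiB at a sustained 6400 KB/s (KB = 1024 bytes here)
echo $(( 1200 * 1024 * 1024 * 1024 / (6400 * 1024) / 3600 ))  # prints 54
```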

In sum, if you have a lot of data, all these procedures take time: the backup itself, the local network copy and then the cloud copy.  If you're using Linux, you probably have the stomach for all this.  In the end, though, you'll have a backup solution that is pretty solid and relatively easy to implement, unlike custom scripted solutions.

Love to hear any comments on how you back up your systems.

ciao!
TAG

References
http://crazedmuleproductions.blogspot.com/2010/02/fsarchiver-good-backup-for-ext4.html
http://www.drobo.com/how-it-works/beyond-raid.php
http://support.crashplan.com/doku.php/recipe/stop_and_start_engine

Here are some of my earlier articles on fsarchiver and a review of the Drobo.

Sunday, February 17, 2013

capturing a live stream from the GoPro Hero 2 to a PC

I wanted to make a multi-camera video of my new drum set, the Roland TD-30K, using Linux.  Also, I didn't want to fuss with storage media; in other words, the dance of:
- record video to a device
- pull out the storage media from the device (Compact Flash/tape/SDHC card, what have you)
- connect the storage media to the PC
- copy the files to the PC

What a pain that sh1t is.  So the second goal was to record all three streams directly to the PC.  Now, I have an older JVC HD10U 720P video cam, and I knew I could grab its stream over FireWire.  I also have a Logitech C910 webcam, whose video I knew I could grab as well.  That left two other cameras as candidates for this experiment:
- my lovely Canon 5D or
- the GoPro Hero 2

The Canon wouldn't work, as there is no streaming option by default.  I'd need some hardware HD streaming solution to do that:
http://oliviatech.com/asus-wicast-streaming-wireless-hd-video-from-camera
http://www.fcp.co/dslr/rigs/81-stream-video-from-your-dslr-over-wifi
http://www.blackmagic-design.com/products/intensity/

However, with the release of the GoPro iPhone app, the GoPro guys have coded a way to stream; it was just a question of whether or not I could hook into it.  After doing some digging on this great thread over at the GoPro User Forums (http://goprouser.freeforums.org/howto-livestream-to-pc-and-view-files-on-pc-smartphone-t9393.html) and with a healthy dose of experimentation, I was able to do it.  More for my notes than anything else, here's how you can stream from your GoPro Hero 2 to your PC.

Requirements
- GoPro Hero 2
- GoPro WIFI BacPac
- PC with FFmpeg installed (my test results were conducted on Fedora 17 Linux)
- Validate that your Hero 2 and WIFI BacPac are up-to-date with the latest firmware versions. As of this writing on 2/17/13, those versions are:
Hero 2: 8.12.222
BacPac: 3.4.1

If the software on these devices is not up-to-date, the below instructions won't work for you.  Also, I don't know if these commands work on a Hero 3, as I don't have one, though someone apparently got it to work here:
http://goprouser.freeforums.org/gopro-hero-3-wifi-streaming-and-browser-access-t10531.html

1) Update the firmware on your BacPac and GoPro Hero 2 if they are not updated!
2) Turn on the BacPac
3) Turn on the GoPro Hero 2
4) Once both are on, press the menu button on the WIFI BacPac.  The screen of the Hero 2 will give you the option to connect to a Phone or Tablet.  Select this option.

This is interesting because when you select the option, the BacPac creates its own wireless access point at the IP 10.5.5.9.  I created an /etc/hosts file entry for the BacPac so that I didn't have to type the IP all the time:

[sodo@computer ~]$ ping gp
PING gopro (10.5.5.9) 56(84) bytes of data.
64 bytes from gopro (10.5.5.9): icmp_req=1 ttl=64 time=1.81 ms
64 bytes from gopro (10.5.5.9): icmp_req=2 ttl=64 time=0.742 ms
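The /etc/hosts entry is just the BacPac's fixed IP plus a short alias (the ping output above shows "gopro" as the canonical name, with "gp" as the alias I actually type):

```
10.5.5.9    gopro gp
```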

If you do a network map of the 10.5.5.0/24 network, you'll find two devices:
[sodo@computer ~]$ nmap -A 10.5.5.0/24
Starting Nmap 6.01 ( http://nmap.org ) at 2013-02-17 11:38 EST
Stats: 0:01:11 elapsed; 254 hosts completed (2 up), 2 undergoing Service Scan
Service scan Timing: About 85.71% done; ETC: 11:39 (0:00:11 remaining)

Nmap scan report for gopro (10.5.5.9)
Host is up (0.0060s latency).
Not shown: 998 closed ports
PORT     STATE SERVICE VERSION
80/tcp   open  http?
8080/tcp open  http    Cherokee httpd 1.2.101b121204_
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at http://www.insecure.org/cgi-bin/servicefp-submit.cgi :
SF-Port80-TCP:V=6.01%I=7%D=2/17%Time=51210783%P=x86_64-redhat-linux-gnu%r(
SF:GetRequest,C6,"HTTP/1\.0\x20\xff\xfbAllow:\x20GET\x20\r\nAccept-Ranges:
SF:\x20bytes\r\nCache-Control:\x20no-cache\r\nCache-Control:\x20no-store\r
SF:\nConnection:\x20Keep-Alive\r\nServer:\x20GoPro\x20Web\x20Server\x20v1\
SF:.0\r\nContent-Type:\x20text/plain\r\nContent-Length:\x202\r\n\r\n")%r(F
SF:ourOhFourRequest,C6,"HTTP/1\.0\x20\xff\xfbAllow:\x20GET\x20\r\nAccept-R
SF:anges:\x20bytes\r\nCache-Control:\x20no-cache\r\nCache-Control:\x20no-s
SF:tore\r\nConnection:\x20Keep-Alive\r\nServer:\x20GoPro\x20Web\x20Server\
SF:x20v1\.0\r\nContent-Type:\x20text/plain\r\nContent-Length:\x202\r\n\r\n
SF:");
Service Info: OS: Unix

And
Nmap scan report for 10.5.5.109
Host is up (0.0011s latency).
Not shown: 995 closed ports
PORT      STATE SERVICE                VERSION
22/tcp    open  ssh                    OpenSSH 5.9 (protocol 2.0)
| ssh-hostkey: 1024 84:a4:7c:c7:42:e5:53:f3:64:a2:89:2c:5e:97:26:60 (DSA)
|_2048 a2:27:68:40:d4:9f:90:72:f2:6c:0b:99:75:67:c3:76 (RSA)
111/tcp   open  rpcbind (rpcbind V2-4) 2-4 (rpc #100000)
| rpcinfo: 
|   program version   port/proto  service
|   100000  2,3,4        111/tcp  rpcbind
|_  100000  2,3,4        111/udp  rpcbind
888/tcp   open  ssl/hbase-master       Apache Hadoop Hbase 2.0
|_sslv2: server still supports SSLv2
|_http-methods: No Allow or Public header in OPTIONS response (status code 404)
|_http-title: 3ware 3DM2 - computer - Summary
| ssl-cert: Subject: commonName=computer/organizationName=3ware Inc./countryName=US
| Not valid before: 2011-10-28 18:44:48
|_Not valid after:  2021-10-26 18:44:48
902/tcp   open  ssl/vmware-auth        VMware Authentication Daemon 1.10 (Uses VNC, SOAP)
16001/tcp open  fmsascon?
Service Info: Host: computer; Device: storage-misc

Note that if you perform the nmap command, you may lose connectivity to the GoPro and will have to restart both the WIFI BacPac and the main body of the GoPro.

5) Connect your PC to the BacPac's wireless access point (WAP) at 10.5.5.9.  The default password is GoProHero.  Important note:

*** In order to send commands to the GoPro via its Cherokee Web Server, you need to connect the computer on which you have a running browser to the wireless access point that the GoPro creates in step 4 ***

6) After you've connected to the BacPac WAP, you'll need to instruct the GoPro to start streaming.  You can do this by sending commands to the webserver that is running on port 80.  Here is the command to start the stream:

You'll need to substitute the password (default or your own custom password) for the GoPro where I've written WIRELESSPASSWORD.  The %02 at the end says "start streaming".  You can turn off streaming by sending the same command, only with %01 on the end:

Someone compiled a list of commands that you can send to the GoPro using the web interface.  I've listed those commands at the bottom of this post.  Also, an enterprising gentleman has created a Windows app to send the various commands to your GoPro while it is in wireless tablet mode: http://cam-do.com/WiGo/.

7) If you've started streaming, validate that the little files that represent the streaming are posted to the Cherokee webserver by visiting this URL:

You should see the files listed in the browseable directory.  This means that the stream is running and that you can capture it:
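For reference, the .m3u8 playlist is just a text file that points at those little .ts segment files.  Here's a sketch using made-up sample text (the segment filenames are illustrative, not the GoPro's actual names) showing how a player finds the segments:

```shell
# Sample HLS playlist text (illustrative; amba.m3u8 has this general shape)
PLAYLIST='#EXTM3U
#EXT-X-TARGETDURATION:2
#EXTINF:2,
segment0001.ts
#EXTINF:2,
segment0002.ts'

# Strip the #-directives to see just the segment files a player will fetch
echo "$PLAYLIST" | grep -v '^#'
```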

8) Once the stream is running, use FFplay (a media player bundled with FFmpeg) to validate the stream works:
[sodo@computer ~]$ ffplay -i http://10.5.5.9:8080/live/amba.m3u8
ffplay version 0.10.6 Copyright (c) 2003-2012 the FFmpeg developers
  built on Dec  1 2012 12:21:08 with gcc 4.7.2 20120921 (Red Hat 4.7.2-2)
...
[applehttp @ 0x7fa0600008c0] Estimating duration from bitrate, this may be inaccurate
Input #0, applehttp, from 'http://10.5.5.9:8080/live/amba.m3u8':
  Duration: N/A, start: 144.977167, bitrate: N/A
    Stream #0:0: Video: h264 (Main) (HDMV / 0x564D4448), yuv420p, 432x240 [SAR 1:1 DAR 9:5], 29.97 tbr, 90k tbn, 59.94 tbc
 154.92 A-V:  0.000 fd=   0 aq=    0KB vq=   21KB sq=    0B f=0/0   0/0 

I've bolded some interesting things about the stream:
* the format is Apple's HTTP Live Streaming (HLS) format (FFmpeg calls this "applehttp")
* the stream is H264 compression
* color model is yuv420p
* size of the stream is 432x240
* framerate is 29.97

Here's a snap of the output


Holy crap..it works!  The best thing about this is now you can capture the output to a file like so:
ffmpeg -y -i http://10.5.5.9:8080/live/amba.m3u8 -c:v copy -an test.mp4

The above command captures the stream and copies it (-c:v copy) in the same format it was sent.  For my purposes, this was best, as I did not have to re-encode the stream.  I simply "copied" the stream to the hard drive of my Linux box.  This saved CPU cycles for the other two streams I captured.
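One small convenience on top of that: generate a timestamped output filename per capture so back-to-back takes don't overwrite each other.  A sketch (the naming scheme is my own invention; the ffmpeg flags are the same stream-copy ones used above):

```shell
# Timestamped output name for each capture session (scheme is illustrative)
STREAM="http://10.5.5.9:8080/live/amba.m3u8"
OUT="gopro_$(date +%Y%m%d_%H%M%S).mp4"

# -c:v copy avoids re-encoding; -an skips audio (the ffplay output above
# showed a video-only stream).  Uncomment to actually capture:
#   ffmpeg -y -i "$STREAM" -c:v copy -an "$OUT"
echo "would capture $STREAM -> $OUT"
```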

I put this stream together with the other two streams in a sample video of my new drum kit.  The only bummer is that the GoPro stream of my kick pedals is so low rez:


If anyone has a clue as to how to edit the firmware to increase the resolution of the stream that gets saved to the webroot of the Cherokee web server, please let me know.  I'd love to be able to suck up a video stream that is higher rez than 432x240.

So that's it.  I'd really like to be able to tweak the firmware of the GoPro Hero 2 to increase the resolution of the captured stream, but unlike Magic Lantern for the Canon 5D, it looks like that hasn't been fully accomplished yet.  Though folks are trying!

Enjoy,
TAG

References
http://ffmpeg.org/ffplay.html
http://ffmpeg.org/ffmpeg.html
http://goprouser.freeforums.org/howto-livestream-to-pc-and-view-files-on-pc-smartphone-t9393.html
http://cam-do.com/WiGo/
http://goprouser.freeforums.org/wifi-backpack-sniffing-t7993-10.html
http://0fyi.wordpress.com/2007/02/23/live-capture-of-a-digital-video-stream-using-dvgrab-and-ffmpeg/
http://chdk.setepontos.com/index.php?topic=5890.0
http://goprouser.freeforums.org/gopro-hd-bus-interface-discussion-and-projects-f23.html

Apple's HLS Streaming Format
http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-HLS-%28HTTP-Live-Streaming%29-78221.aspx
http://developer.apple.com/library/ios/#documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/UsingHTTPLiveStreaming/UsingHTTPLiveStreaming.html
http://www.iis.net/learn/media/live-smooth-streaming/apple-http-live-streaming-with-iis-media-services
http://www.3ivx.com/technology/windows/metro/http_live_streaming.html

Commands you can send to the GoPro while Phone/Tablet Streaming is active
Update 2013/04/19
Added the correct syntax for password entry..my original cut-and-paste included HTML characters that broke the display of the commands in Blogger
*** end update ***
The general query structure is:
http://[ip]/[mode]/[command]?t=[password]&p=[option]

Where [mode] can be bacpac or camera.

Queryable Information
Turn on camera : http://[ip]/bacpac/PW?t=[password]&p=%01
Turn off camera : http://[ip]/bacpac/PW?t=[password]&p=
Change mode : http://[ip]/bacpac/PW?t=[password]&p=%02

Start capture : http://[ip]/bacpac/SH?t=[password]&p=%01
Stop capture : http://[ip]/bacpac/SH?t=[password]&p=

Preview
On : http://[ip]/camera/PV?t=[password]&p=%02
Off : http://[ip]/camera/PV?t=[password]&p=

Mode
Camera : http://[ip]/camera/CM?t=[password]&p=
Photo : http://[ip]/camera/CM?t=[password]&p=%01
Burst : http://[ip]/camera/CM?t=[password]&p=%02
Timelapse : http://[ip]/camera/CM?t=[password]&p=%03
Timelapse : http://[ip]/camera/CM?t=[password]&p=%04

Orientation
Head up : http://[ip]/camera/UP?t=[password]&p=
Head down : http://[ip]/camera/UP?t=[password]&p=%01

Set Resolution of Video Recorded to the Device
WVGA-60 : http://[ip]/camera/VR?t=[password]&p=
WVGA-120 : http://[ip]/camera/VR?t=[password]&p=%01
720-30 : http://[ip]/camera/VR?t=[password]&p=%02
720-60 : http://[ip]/camera/VR?t=[password]&p=%03
960-30 : http://[ip]/camera/VR?t=[password]&p=%04
960-60 : http://[ip]/camera/VR?t=[password]&p=%05
1080-30 : http://[ip]/camera/VR?t=[password]&p=%06

FOV
wide : http://[ip]/camera/FV?t=[password]&p=
medium : http://[ip]/camera/FV?t=[password]&p=%01
narrow : http://[ip]/camera/FV?t=[password]&p=%02

Photo Resolution
11mp wide : http://[ip]/camera/PR?t=[password]&p=
8mp medium : http://[ip]/camera/PR?t=[password]&p=%01
5mp wide : http://[ip]/camera/PR?t=[password]&p=%02
5mp medium : http://[ip]/camera/PR?t=[password]&p=%03

Photo Resolution in Black Edition
12mp w : http://[ip]/camera/PR?t=[password]&p=%05
07mp w : http://[ip]/camera/PR?t=[password]&p=%04
07mp m : http://[ip]/camera/PR?t=[password]&p=%06
05mp m : http://[ip]/camera/PR?t=[password]&p=%03

Delete (may only work on Black Edition..not confirmed)
Last
http://[ip]/camera/DL?t=[password]&

All (Format)
http://[ip]/camera/DA?t=[password]&

Timer
0.5sec : http://[ip]/camera/TI?t=[password]&p=
1sec : http://[ip]/camera/TI?t=[password]&p=%01
2sec : http://[ip]/camera/TI?t=[password]&p=%02
5sec : http://[ip]/camera/TI?t=[password]&p=%03
10sec : http://[ip]/camera/TI?t=[password]&p=%04
30sec : http://[ip]/camera/TI?t=[password]&p=%05
60sec : http://[ip]/camera/TI?t=[password]&p=%06

Localisation
On : http://[ip]/camera/LL?t=[password]&p=%01
Off : http://[ip]/camera/LL?t=[password]&p=

Beep Volume
0% : http://[ip]/camera/BS?t=[password]&p=
70% : http://[ip]/camera/BS?t=[password]&p=%01
100% : http://[ip]/camera/BS?t=[password]&p=%02

Sunday, December 02, 2012

"yum check" always helps..

A couple weeks ago, I was successful in upgrading my Fedora 15 system to Fedora 17 using PreUpgrade.  It worked well for me, outside of the usual NVidia funkiness on my machine (virtual consoles don't work on my SC1430).  However, I did notice a bit of detritus in my installed programs list when I tried to do my most recent "yum update".

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:
       
         1. You have an upgrade for kernel-tools which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of kernel-tools of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude kernel-tools.otherarch ... this should give you an error
            message showing the root cause of the problem.
       
         2. You have multiple architectures of kernel-tools installed, but
            yum can only see an upgrade for one of those arcitectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.
       
         3. You have duplicate versions of kernel-tools installed already.
            You can use "yum check" to get yum show these errors.
       
       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).
       
       Protected multilib versions: kernel-tools-3.6.8-2.fc17.x86_64 != kernel-tools-3.3.4-5.fc17.i686


So I ran "yum check" to find the issues:
[sodo@computer bin]$ yum check

Loaded plugins: langpacks, presto, refresh-packagekit
kernel-tools-3.6.7-4.fc17.x86_64 is a duplicate with kernel-tools-3.3.4-5.fc17.i686
1:kmod-nvidia-2.6.41.10-3.fc15.x86_64-280.13-2.fc15.16.x86_64 has missing requires of kernel-uname-r = ('0', '2.6.41.10', '3.fc15.x86_64')
1:kmod-nvidia-2.6.43.8-1.fc15.x86_64-280.13-4.fc15.6.x86_64 has missing requires of kernel-uname-r = ('0', '2.6.43.8', '1.fc15.x86_64')
netbeans-6.9-3.fc15.noarch has missing requires of netbeans-platform12 >= ('0', '6.9', None)
netbeans-apisupport-6.9-3.fc15.noarch has missing requires of netbeans-platform12 >= ('0', '6.9', None)
netbeans-ide-6.9-3.fc15.noarch has missing requires of netbeans-platform12 >= ('0', '6.9', None)
netbeans-java-6.9-3.fc15.noarch has missing requires of netbeans-platform12 >= ('0', '6.9', None)
xorg-x11-drivers-7.4-8.fc17.x86_64 has missing requires of xorg-x11-drv-nouveau

To clear these errors, I:
- removed netbeans, as I don't use it
- added the nouveau driver
- removed the older, Fedora 15 kmod-nvidia drivers
- removed the older, duplicate kernel-tools (3.3.4-5)
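Spelled out as commands, that cleanup looks roughly like the list below.  The package names come straight from the "yum check" output above, but the exact versions will differ per system, so this sketch just prints the commands for review rather than running them (they'd need root):

```shell
# Print the cleanup commands for review; run them as root once verified
CMDS=$(cat <<'EOF'
yum remove 'netbeans*'                      # unused IDE with broken deps
yum install xorg-x11-drv-nouveau            # satisfies xorg-x11-drivers
yum remove 'kmod-nvidia-2.6.4*'             # stale Fedora 15 nvidia modules
yum remove kernel-tools-3.3.4-5.fc17.i686   # duplicate kernel-tools
yum update                                  # should now complete cleanly
EOF
)
echo "$CMDS"
```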

Once that was done, I re-ran "yum update" and no error messages appeared!  I was good to go for launch.
TAG
Feel free to drop me a line or ask me a question.