MacBook Pro – Gaming Rig


I love my rMBP. I have the “Late 2013” model, which means I’m one generation back from current – the “Mid 2014” model.

OS X with Citadel

The difference is essentially that I have the 2.3 GHz (i7-4850HQ) with 6 MB on-chip L3 cache processor, instead of the 2.5 GHz (i7-4870HQ) with 6 MB on-chip L3 cache processor. All the other specs are effectively the same between versions.

With that processor, 16 GB of onboard RAM, a 512 GB SSD, and an Nvidia GeForce GT 750M with 2 GB of GDDR5 memory, this sounds like a reasonably decent spec level for playing video games. The issue, of course, is that most games are made for PC rather than Mac.

I haven’t been a gamer for a while (15 or so years). However, my son is now old enough to play Mass Effect, a series I’ve wanted to play since shortly after the first game was released. He currently plays it on his Xbox 360, and although it is fun to spend time with him while he plays and discuss strategy and options, I wanted to play as well. I had no interest in purchasing another game console, so that meant I would be playing the PC version. As my only “personal use” computer is my MacBook, that meant a Windows install. There are many ways to run Windows on a Mac now, and I use, or have used, most of them, so this goal wasn’t frightening.

Virtualization is the easiest way to run Windows on a Mac. The user continues to run OS X, and the Windows instance lives in a Type 2 (software running on the underlying OS) hypervisor. Parallels, VMware Fusion, and VirtualBox are all hypervisors that I’ve used in an OS X environment. For the last few years I’ve been using VirtualBox exclusively on the various Macs I’ve owned to host my virtual machines. With the purchase of my current rMBP, I’ve also started running my virtual environments from SD cards, so as not to take up valuable real estate on the onboard SSD.
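
Pointing VirtualBox at the card is basically a one-liner. A quick sketch – the volume name below stands in for whatever your SD card mounts as, new VMs will be created on the card, and existing VMs have to be moved and re-registered by hand:

VBoxManage setproperty machinefolder "/Volumes/VM-CARD/VirtualBox VMs"
VBoxManage list systemproperties | grep "Default machine folder"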

Where the VMs live

These cards host my Windows XP, Windows 7 x64, Windows 8.1 x64, and Ubuntu virtual machines. Since there are 2 VMs per card and only one SD slot in the Mac, with this method I can only run two of my virtual machines at a time.

My trusted virtualization model doesn’t work in this case anyway, as the games need to talk directly to the hardware. That means no virtualization – the Windows OS needs to be installed as a local OS.

Apple supports a local Windows install very easily with their Bootcamp product. Bootcamp allows the user to partition the local hard drive, and then lets the user select the boot partition (Windows or OS X) when the computer POSTs by holding down the Option key. The negative to this model is that I did not want to sacrifice any of my precious SSD space to a Windows partition. The 512 GB is all I have at the moment, and there are no aftermarket drives available for the late-2013/mid-2014 rMBPs for expansion. Apple uses a proprietary non-M.2 PCIe blade SSD.

Now I’m to my third requirement (after no virtualization, and don’t partition my onboard storage) – I have to run this Windows installation from an external drive.

Bootcamp is no help. Bootcamp does not support installing / booting Windows from an external drive. However, there are several people who have done this with slightly older Macs, and I was able to take their work and make small changes for the current rMBP.

First, ignore this post:

Windows To Go

There are several reasons this is a bad choice for this operation. You have to use a USB 3.0 drive that is certified for WTG. This is a real need, not marketing – the USB stick has to present itself as an internal disk. WTG requires volume activation (no retail users allowed). Finally, even if you do build it, configure it for UEFI boot, and otherwise make it all happy, the Mac won’t boot to it anyway.

Here is the first useful post:

How Not to Install Windows on your Mac’s External Disk

This is a great / fun read that goes over the differences between BIOS and EFI, as well as explains why many of the things you’re going to want to try won’t work. He DOESN’T go as far as to explain how to actually accomplish your task.

Here is the second useful post:

install windows 8.1 to external disk

This one kind of works, but it resulted in a lot of bugginess for me. Your mileage may vary. It is very “cut to the chase,” but it doesn’t give a lot of detail or hand-holding for non-technical users.

Here’s the third and most useful post:

Mac: Install Windows 7 or 8 on an external USB3 or Thunderbolt drive without using bootcamp

Yay! Helpful info! Let me save you a little bit of time. First, you can’t install Windows 7 on a USB3 drive, and if you were thinking about installing on a USB2 drive and then moving to a USB3 enclosure, the rMBP only has USB3 ports, so it won’t boot to it anyway. Second, you should go for the LaCie Thunderbolt drive, not USB. It’s faster, and works better for the install process. For me it was as easy as following the steps in that post (using an existing Windows 7 machine – on USB) to make the external disk UEFI bootable, deploying the installation image, and then booting to it. I also downloaded the Bootcamp drivers to that disk as well, to allow installation of the hardware when I knew I wouldn’t be able to get on the Internet due to the NIC not being recognized.
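
The linked post has the full details, but the general shape of the process, run from that existing Windows machine with the WAIK installed (for imagex), looks roughly like this. Treat it as a sketch: the disk number (2), the S: and W: drive letters, and the 8.1 media mounted at D: are all assumptions – check yours with diskpart and imagex /info before doing anything destructive.

diskpart
list disk
select disk 2 (the external drive – be very sure of the number)
clean
convert gpt
create partition efi size=200
format fs=fat32 quick
assign letter=S
create partition primary
format fs=ntfs quick
assign letter=W
exit
imagex /apply D:\sources\install.wim 1 W:\
W:\Windows\System32\bcdboot W:\Windows /s S: /f UEFI

That last line deliberately uses the copy of bcdboot from the freshly applied 8.1 image, since older versions of bcdboot may not understand the /f switch.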

If you don’t want an EFI partition on both your internal storage and your external disk, this post also looks interesting:

Guide: create external Windows 7 boot drive for Macbook

I couldn’t test it, as my internal disk is encrypted. Having an EFI partition on both disks doesn’t really bother me, but it should work.

With the information I’ve linked to, a thunderbolt / USB combo drive, and a copy of Windows 8.1, I now have a working Windows 8.1 install for my MacBook, and have been using it for games for about 3 and a half weeks. So far no blue screens, or any unusual behavior. I call this endeavor a success.

Windows 8.1 with Liara

And here we have the happy gaming machine…

OSX Yosemite – first week (not so) fun


Yosemite is characterized by granite and remnants of older rock. Perhaps that’s why Apple chose it as the name for their latest operating system. Anything that turns a working computer into a rock should have a relevant name.

I’m just ranting a little bit here. I recently upgraded my Late 2013 rMBP to OS X 10.10 Yosemite, and it returned the favor by enticing me to dust off my troubleshooting skills.

All that being said, the root cause of my issues technically wasn’t part of OS X nor the Apple ecosystem, but I wasn’t happy after my upgrade just the same.

The problem I faced made my Mac extremely difficult to use. Within 2 to 10 minutes of powering on, any running application became unresponsive: pinwheel on mouseover, no ability to force quit, and if Activity Monitor was already open, any apps I had running – and the Finder itself – showed “Application Not Responding.” I was also unable to open any additional applications once the laptop reached this state. The behavior existed both after an upgrade to OS X 10.10 and after a clean install (once my applications were also installed). The only way to get back to a functioning state was a hard reset.

I tried to diagnose/repair using all the usual suspects:

Configuration changes I tried or checked included (quick ways to verify a few of these from Terminal are sketched after the list):

  • Verified FileVault was disabled – encrypting the drive will slow your machine until that process is complete
  • Reduced transparency in accessibility options – one common thread among people reporting slowness was the thought that it was graphics-controller related, so reducing overhead may help in some cases
  • Disabled graphics switching in power options – again, if there is a graphics controller issue, staying on one controller or the other may help
  • Reduced the number of items that Spotlight was indexing
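
A few of these can be sanity-checked from Terminal. A minimal sketch – the defaults key name is my assumption, and the System Preferences checkboxes remain the authoritative settings:

sudo fdesetup status (should report “FileVault is Off.”)
sudo mdutil -s / (shows whether Spotlight indexing is enabled on the boot volume)
defaults read com.apple.universalaccess reduceTransparency (1 = reduced)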

None of these helped. Although I consider myself a power user, I don’t tend to have many applications running at a time. Typically just Safari, the Microsoft Office 2011 Suite, Terminal, and Remote Desktop. With a Core i7, 16 GB of RAM, and an SSD with 50% free space, it was painful to watch my rMBP struggle just to paint the screen with a minimal number of applications open.

Still, everything pointed at the Finder or a graphics issue. It seemed unreasonable to presume that several applications were having a simultaneous problem.

I downloaded and ran EtreCheck and saw nothing unusual. I went through all of my applications to see if there had been any version updates in the two weeks since I had last checked. One of them had – Dropbox – for a Finder-related issue no less.

I downloaded and installed the (beta) update, hoping that it would resolve my problem. It didn’t, but it did make me take a long hard look at the only two Launch Daemons that had been consistent through all of my changes.

The first was Dropbox itself. The reason it was always consistent was that although I could switch browsers, productivity suites, shell applications, and RDP tools, I couldn’t imagine living without Dropbox. I use it to keep everything synced across 8 devices on 4 different operating systems (iCloud isn’t a good fit for me). It turns out that Dropbox modifies the Finder to add green checkmarks to files to signify that they have synchronized. Turning off that feature doesn’t impact Dropbox functionality, but it isn’t exposed as a preference. Users have to remove the resource from the Dropbox app with the following commands:

sudo rm -rf /Library/DropboxHelperTools

rm /Applications/Dropbox.app/Contents/Resources/DropboxHelperInstaller.tgz

The second daemon was DisplayLink. I use the DisplayLink application to drive USB and Ethernet graphics devices – usually for displays that are physically distant from my laptop. I’ve used it for years and never had an issue. It turns out that they now have an issue. It wasn’t the issue I was having, but I’d found another app that was having compatibility problems with the Finder in Yosemite, and that made it suspect.

After removing DisplayLink and disabling the Finder modifications in Dropbox, all of my issues with Yosemite have disappeared (other than that it’s ugly). My rMBP doesn’t run hot, hang, or have trouble painting the screen. I’ve had to disable auto-update in Dropbox to avoid possibly reintroducing my problem, but that’s a small price to pay.

I’m confident that I won’t have to wait long to be able to use both of my problem child applications again. Each of them is mainstream and under active development.

But I’m not rushed.

Why Train? Confidence.


There have been several people posting the “Train people well enough so they can leave, treat them well enough so they don’t want to,” quote lately, and attributing it to Richard Branson. I happen to agree with this sentiment, and even if Mr. Branson didn’t say it, he seems to run his companies like he believes it.

If a captain of industry operates his companies by this rule, and everyone who comments on it seems to agree, why do so many companies not train their staff? I’ve seen three reasons that leap out to me.

The first reason seems to be cost.

Since the economic downturn of the late 00’s, organizations have taken the stance of cutting costs in any and all ways possible. Labor is a high-dollar item, and is often one of the first areas scrutinized for reduction. Ten years ago we saw the simple elimination of professional development. Organizations stopped reimbursing tuition, stopped sending employees to conferences, stopped paying for certifications, or stopped even providing formalized training on the organization’s internal processes and procedures.

Once the items outlined in reason one became the norm, the second reason came into play, which seems to be a belief that training is unnecessary.

The belief that training is unnecessary stems, first, from the job market: if the pool of job seekers is saturated with people who already have the skills and require no training, why would a company put forth the extra effort and money to train someone who doesn’t? Companies decided that they would only hire individuals who came “pre-qualified” with training and experience. When companies were able to continue operating under that model, they moved on to providing no training for new hires, with or without prior experience. After all, why would anyone have been hired if they couldn’t do the job? The good people would “pick it up,” and the people who couldn’t (no matter how complex the job) weren’t worth keeping anyway.

The third reason seems to be turnover.

Training is an investment. Why would a company invest in an employee who could take that investment with them when the employee moved on? Companies certainly wouldn’t want their competitors to benefit from any investment made in staff that might someday go work for that competitor.

Since these reasons exist, why is there any incentive to train?

Confidence gives people charm, the courage to fail, emotional security, and the ability to keep their head in bad situations. Confidence allows your staff to face all kinds of situations, both good and bad, knowing “I can handle this.”

Here’s what happens when you don’t help your team build that confidence:

My wife and I recently had occasion to purchase the Cash Passport Prepaid MasterCard for our son who was traveling out of the country. The only place we could easily purchase this product was at the Travelex Foreign Currency Exchange office at Sky Harbor Airport. This office offered two services. They performed currency exchange, and they sold this single product. In fact, the entire office was covered in advertising touting the MasterCard product.

There was only one young lady working in the office that day. My wife and I told her that we wanted to purchase the prepaid MasterCard. She was horrified. She explained that she had only worked there a week, had never sold one before, and didn’t know how. She was very nice, very professional, and deeply sorry for the inconvenience, but our transaction still took 30 minutes, and she had to call her manager at home 4 times.

This was a person who wanted to help, but wasn’t equipped to do so. My wife and I were very patient, but it was obviously an uncomfortable experience for this young lady.

Later that day we also went to pick up a Blu-ray for our daughter at Fry’s Electronics. They advertise price matching, so we brought our selection to checkout with the appropriate product up on the Amazon mobile site. Even though there were price-matching signs everywhere, and Amazon typically prices at a level lower than brick-and-mortar stores, the cashier was unable to approve the price match. Two “managers” had to be engaged to approve the price change, while the young man checking us out stood to one side and watched helplessly. He wasn’t nearly as enthusiastic about trying to help us as the young lady at Travelex, but he was equally unable to provide a basic service advertised by the organization he worked for, and our relatively simple transaction took about 10 minutes.

Is this how your team performs? Do they wow their customers? Do they have everything they need to provide amazing levels of service? Do they take risks? Do they engage? Are they confident that they can help the next customer they talk to, or do they look at each person in line to be helped as one more hurdle they have to jump to get through to the end of the day?

If your team doesn’t have what they need to amaze their customers, both internal and external, that is a demoralizing experience that will show through in each and every transaction. I know that when I’m a customer, and I see staff that are obviously adrift, all I can think about is that if the company won’t take care of their own staff, they obviously aren’t going to take care of me.

“Welcome to Company X. I probably won’t be able to help you.”

WSUS, Drive Space, and Pain


Today’s annoyance started earlier this week when I happened to notice that the server I run WSUS 3.0 SP2 on at home was starting to run a little low on disk space. A few minutes of checking revealed that yes, it was the WSUS directory that was the culprit (at over 120 GB). No worries, thought I. I’ll just run the WSUS server cleanup wizard and all will be well.

Checking on the server a couple hours after firing off the wizard revealed that very little progress had been made. The progress bar had moved perhaps 3% towards completion, and seemed to be hung on “deleting unused updates”. I thought that perhaps the process was hung, something was holding a file open, or the server had hiccuped. I stopped the process, rebooted the server, started it again, and went to bed.

By the next afternoon there was more progress – to perhaps 10 or 11%. Since I’m fairly patient, the server wasn’t in immediate danger of running out of space, and the process was progressing, I decided to wait it out. Four days later the process aborted at just under 60% completion.

OK, the “wait it out” method didn’t seem to be working. A few quick Google searches revealed many admins recommending that you run the cleanup wizard often (weekly) to prevent just such an occurrence caused by an overly large WSUS file store, and correspondingly large database. Thanks guys.

Since the “cleanup everything” method didn’t seem to be working, I tried individual options in the cleanup wizard to see what would work. “Decline superseded updates” worked without error. “Decline expired updates” and “Delete computers not contacting the server” also executed quickly and flawlessly. “Delete unneeded update files” took about 40 minutes, but it also executed, freeing up about 6 GB of space in the process.

Since the issue seemed to be one of efficiency (the server was running too slowly to execute such a large process), I went looking to see what I could do to either have it not work as hard, or have less to do.

With that in mind, the first thing I attacked was the disk itself. This server has been running for about 3 years now, and since my home server supports a whopping 4 users, it doesn’t get a lot of preventative maintenance or performance tuning. WSUS uses the Windows Internal Database (formerly SQL Server 2005 Embedded) as the back-end engine, and I’d never given it any attention. First, I took a look at the database files on the disk. I discovered the database was about 10 GB, and the log was almost as large at 8 GB. This made sense as, again, the files had been growing on demand for nearly 3 years.

I downloaded SQL Server Management Studio Express to take a look at the utilization of the files. You can download the version for 2005, but I went ahead with 2008 R2 instead to stay closer to current. Installing the software was no problem, and then I just needed to connect to the database engine.

Once you open the management studio, there are 2 caveats to connecting to the internal database engine. First, the only configuration for connectivity is named pipes, so the server name needs to be in the format of: \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query. Then, you need to be logged in as a server administrator and use Windows Authentication. You’ll get a logon failed error unless you execute the console with a “run as administrator”.
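
If you’d rather skip the Management Studio install entirely, sqlcmd (assuming you have the SQL command-line tools available somewhere) should reach the same instance with the same caveats – named pipe and Windows Authentication from an elevated prompt. A sketch:

sqlcmd -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E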

Once I had access to the database engine, I selected the WSUS database, and shrank the log and database files. I reclaimed 60% from the database and 97% from the log. Since autogrowth had over time created fragmentation and diminished performance, I then used SQL Server Configuration Manager to stop the Windows Internal Database, in preparation of defragging the server.
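
For reference, the shrink itself is just a little T-SQL. A minimal sketch, assuming the standard WSUS database name of SUSDB and its default logical file names – check sys.database_files first if yours differ, and the target sizes (in MB) are purely illustrative:

USE SUSDB;
GO
SELECT name, physical_name FROM sys.database_files;
GO
DBCC SHRINKFILE (SUSDB, 4096);
DBCC SHRINKFILE (SUSDB_log, 256);
GO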

With SQL stopped, I also stopped the IIS Web Server and the Update Services. With everything related to WSUS offline, I then defragged the disk with Defraggler.

Defragmentation took most of a day, after which I restarted the server. Upon restart I re-ran the wizard only selecting “Unused updates and update revisions”. The process still took about 4 hours to run, but it did finish, and freed up about 70 GB of space.

I’ll be automating the cleanup wizard tasks in the future to avoid having this happen again…
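
The WSUS API is exposed to PowerShell on the server itself, so the likely approach is something along these lines dropped into a weekly scheduled task. This is a minimal sketch (run locally on the WSUS server in an elevated PowerShell prompt), not something I’ve polished yet:

# Load the WSUS administration assembly and connect to the local server
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

# Build a cleanup scope equivalent to ticking every box in the wizard
$scope = New-Object Microsoft.UpdateServices.Administration.CleanupScope
$scope.DeclineSupersededUpdates    = $true
$scope.DeclineExpiredUpdates       = $true
$scope.CleanupObsoleteUpdates      = $true
$scope.CleanupObsoleteComputers    = $true
$scope.CleanupUnneededContentFiles = $true

# Run the cleanup and report what was removed
$wsus.GetCleanupManager().PerformCleanup($scope)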

New Toy – Kindle Fire HD 7″


I’m just pulling my head up from about 8 mind bending months, and have decided my mind is a little too bent.

In an effort to unbend at least a little, I went out and indulged myself a bit today, and purchased a Kindle Fire HD.

I’ve written about the original Fire, and the positives still hold true:

1) Cheap
2) Small and light
3) Built-in cloud storage

It still doesn’t come with Google Play, cellular connectivity, or a reasonable UI, but the following items are fixed:

1) OS is now a more recent version of Android – 4.0 Ice Cream Sandwich (Yes, I know – not Jelly Bean)
2) Bluetooth
3) Camera
4) 16 GB of storage (I still use Dropbox, so this was more of a nice to have than a need).

Now we have new positives! The screen compares favorably to the iPad retina display. I’ve been watching Dr. Who episodes on Amazon Prime all evening, and the device plays them as well as my MacBook Pro – and far better than my original Fire. The new physical design is thinner, lighter, easier to hold, and you no longer blind yourself with glare. Although some reviewers have said the apps are laggy, I have found it to be significantly snappier than my original Fire. (The 1.2 GHz dual core CPU and 1 GB of RAM are large increases over the version 1 model).

I still don’t like the Amazon UI overlay, but the good news is that it’s no longer so slow that it makes you cry. The bad news is that it’s still so ugly it makes you cry.

This is more of “hey, I got a new Kindle” than an actual review – but so far, I have to say I endorse it. As with the original Kindle Fire, the biggest benefit to the Fire HD is the Amazon ecosystem behind it – especially if you have Amazon Prime. It’s comfortable, it’s a value proposition, it looks and sounds great.

Is it an awesome computing device – absolutely not. For that, and to stay in the price point, you could go to the Nexus 7, but then you have to trade the Fire HD’s screen, speakers, and extra storage to get that UI and the full Android environment.

The Fire works for me. Other than media consumption, I use a tablet for e-mail, note taking, and Facebook. Everything else, I go to the MacBook Pro. The Fire does all of these things just fine…

Verizon misses customer service opportunity…


I recently accepted a position with a new employer, and with that position came a company issued cell phone. I’ve been managing my own phone for a long time. With my last several jobs up until the one right before this one, I would simply expense the portion of my mobile bill that applied to my individual phone. Now, for the first time in a long while, it was “Here’s your phone”, as opposed to “This is how you expense your phone”.

Since I am now the somewhat disgruntled owner of an iPhone 4, there seemed no need to keep paying close to $100 a month to also have my Droid 2, so I went to the Verizon Wireless store to cancel it.

I’d been a Verizon customer for several years, and had been using this phone for about 2 of those, so I expected no issue in having the phone shut off. I wasn’t closing the account, as my wife and son would continue to have their service through Verizon. I expected the entire process to be fast and painless. Surprise! It wasn’t.

First, I was told that I was under contract on my phone until early 2014. I was a bit surprised by that. The agent explained to me that when my son washed his (no features) phone, and my wife had it replaced with another (no features) phone, Verizon used my smartphone’s reduced price upgrade / renewal instead of his. So, I had ended up paying $100 to get a standard phone, and also extended my 2 year old phone’s contract out an additional 2 years. The agent then informed me that for this to happen was not at all unusual – that it happens all the time. I asked that since he could see what had happened, could it be fixed? It wasn’t doing me any good to have my son’s new phone already eligible for an upgrade. I was told no, we would have had to catch it when it happened. We didn’t, so we’re locked in, and I have to pay a cancellation fee.

Since the agent was unwilling to do what I believed made sense, I asked him what he suggested. Was there any way I could avoid a cancellation fee? He said no. However, his suggestion was that if I wanted to move my existing number to a standard phone, there would be no charge, and the new cost would only be $9.99 per month. My daughter doesn’t have a phone, so I asked him to show me the cheapest standard phone they had. He did – it was $150. So, to cancel would be $155, and to move to a less expensive service would be $150, plus $9.99 per month for a 2 year minimum. That made the decision fairly easy – I spent the extra $5, and cancelled my service.

Yesterday Verizon customer service called to ask why I cancelled, how it went, and if they could do anything to bring me back. I told them I couldn’t think of anything. (They did try to upsell me on additional services though).

So, here are the fails:

1) Verizon made an account error (using the wrong phone’s upgrade eligibility). They were able to see that, and were unwilling or unable to fix it.

2) Follow up call for no other reason than to have made the call. The caller had no information as to why I cancelled or how the cancellation process had gone, but someone somewhere decided they should call all cancelling customers. That’s fine, but call with a suggestion before leading with “How can we bring you back”. I had explained earlier how to keep my business, but it had been turned down. To call later and ask the same question is more annoying than good customer service – but it lets someone place a mark on a checklist somewhere.

As usual, we’re giving lip service to good customer service, but not actually empowering employees to provide it.

There is a good article on Forbes relating to the same issue. And I can relate the author’s pain when trying to cancel XM radio after I traded my car in for one without a satellite receiver.

One more example that leads me to conclude that the company that actually gets customer service right will have a huge advantage over their competition.

Take two tablets and call me when you’re ready for three…


We’ve been a one tablet household for nearly two years now. That tablet, my wife’s iPad, has been her primary computing device for nearly the entire time she’s owned it. The iPad pushed her laptop to her desk, and her desktop to the garage. For anything less than actual content creation such as largish documents or web and graphics work, she almost never goes to her Windows machine.

Since my computing needs tend to extend beyond content consumption, I have always carried my laptop with me. However, more and more opportunities seem to have arisen lately – watching TV, waiting for a child to finish an activity, or generally any moment where pulling out the laptop was enough of an inconvenience that I just chose not to do it. My Droid 2 (which I still love) filled that gap somewhat, but with the small screen and relatively short battery life, it just wasn’t a good substitute for a dedicated device that wouldn’t leave me without a phone when the battery died.

So, I started thinking about a tablet for myself.

The first question was iPad or Android tablet? I decided pretty quickly that I would go Android. Although my primary computing device is a MacBook Pro, and my wife loves her iPad, I couldn’t bring myself to go iPad. First, I consider the iPad to be far too expensive – especially for the occasional use I envisioned. I went expensive on my primary computing device in my MBP; that was enough. I don’t need (and honestly couldn’t afford) to have every device in my life be at the high-end premium level. Second, I have some investment in Android apps. They didn’t cost much, but as I will continue to use an Android phone, I don’t want to have to buy every app I want once for Android and once for iOS.

Since I wanted Android – what tablet did I want? After evaluating several online, as well as using the local Best Buy as a showroom, I really liked the Galaxy Tab 8.9 (probably because the Tab 2 7.0 wasn’t on display yet). It was reasonably light, snappy, had a vivid screen, and overall seemed comfortable to use. The problem I had with it was the same as I had with the iPad – price.

OK, given that price was always going to be my sticking point, what was the cheapest tablet I could find? I really wanted something that I wouldn’t cringe when handing it to my 9 year old daughter. Using that criteria – there was only one – the Kindle Fire. With refurbs from Amazon going for a little over $100, they were practically disposable. If I bought one and didn’t like it, I’d just return it without remorse.

Now, before everyone cringes at the thought of using the Fire as their primary tablet, here were my main pros and cons of the device:

Pros

1) PRICE
2) Did I mention price?
3) The 7″ display (10, and even the 8.9 was a little large for me)
4) Cloud storage via Amazon

Cons

1) OS – uses a customized (crippled) version of Gingerbread
2) No Google Market (Google Play), can only buy apps from Amazon Market
3) Only 8 GB storage
4) No bluetooth
5) No cellular connectivity
6) No camera

The pros don’t need a lot of discussion. I am notoriously cheap, so price is a huge factor. I liked the size, and would have ended up with a 7″ tablet if at all possible anyway. The cloud storage is nice, but honestly in the week I have used the device, I’ve never used the Amazon cloud in favor of Dropbox.

As for the cons:

Amazon’s OS. Hated it. It was EXTREMELY responsive mind you, I just hated the UI. Easily fixed – Go Launcher will replace the default UI without even rooting the device. It took under a minute to install. It doesn’t change the underlying OS, but it does change the interface to something almost exactly like the Android 2.3.4 version on my Droid 2.

No Google Play was also a little annoying, but also fairly easy to overcome. Putting Google Play apps on the Kindle Fire (sideloading) isn’t difficult at all. In the device menu of the Fire, turn on “Allow Installation of Applications from Unknown Sources”. Now all you have to do is get the .apk files to the Fire. You can do this via USB, but I find that cumbersome. My method is to use the “Astro Files” file manager on my Droid 2 to make a copy of any app I want on the Fire. After backing up an app using Astro Files, the .apk file ends up in \backups\apps on the SD card. From there I move it to the Dropbox folder, and voila, the .apk is available to the Fire. Click to install from Dropbox, and Angry Birds lives on the Fire without paying another $2.99.
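
If you’d rather push apps from a PC over USB instead of bouncing them through Dropbox, the standard Android tooling also works. A rough sketch, assuming adb is installed and already recognizes the Fire (which takes a one-time edit to adb_usb.ini), and with a made-up .apk path:

adb devices
adb install C:\backups\apps\com.rovio.angrybirds.apk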

Only 8 GB storage. Can’t do much about that with no media card reader, but between Dropbox and USB I can’t see this being an issue with a device that will primarily be used for e-mail and web browsing.

No bluetooth also doesn’t have a workaround, but I only use a bluetooth headset for phone calls, and haven’t felt a lack yet.

No cellular is actually a benefit for me. I don’t want to pay for another data plan, and if I did, it would be far more useful to enable my phone as a hotspot than to buy a data plan for each of my devices individually.

No camera is the only item I have actually felt the lack of so far.

All in all, I feel I’ve ended up with a decent Android tablet. Once I consider the price, it is a screaming awesome tablet. (I love how that works). Amazon has just announced the next version of the Fire should be out soon. Will I buy it again? Probably not. If I have to go to the $199 full price, then the Samsung Galaxy Tab 2 7.0 at $249 overcomes all the negatives with only a $50 differential, and then the Fire can be passed on to my daughter who already claims an ownership stake in it anyway…

Deploying Windows 7 With Stone Knives and Bearskins


So, one of the IT corporate objectives for 2012 was the deployment of Windows 7 to the userbase – in a virtual environment.

We couldn’t go virtual everywhere of course. We have sales people and other traveling folks who use laptops. We also have developers and such who have a great many monitors, and who need horsepower at the desktop. However, since 80% of our on-site staff are “call center” types, virtual would be a perfect fit.

As usual, plans changed at the last minute.

I started pricing out the hardware – Windows terminals (I was leaning towards HP) in the cube farm, and a Dell back end because all of our other servers are Dell, and I didn’t see the need to go overly crazy with my hardware spend. We don’t currently have a SAN, so storage would have been the biggest part of the hardware expense. I also planned to go XenDesktop because of the negative experiences we had had with VMware View when I was with Wright Medical – particularly with local USB printers, of which we have a great many.

The cost was creeping higher – but nothing terribly unexpected. However, I do work for a company where we buy almost all of our technology equipment used or refurbished from either the Dell Outlet or from Dell Financial Services. Needless to say, we’re very price sensitive.

My manager found some Dell OptiPlex 790s available on the DFS site. These units had 8 GB of RAM, Core i5 processor, and were the ultra small form factor. These were $640, with an additional 25% off coupon. At that price point, they were significantly cheaper than the virtual solution. With this new option, the Windows 7 migration was changed to be a desktop replacement as opposed to a migration to a virtual environment.

With my objective adjusted, I now needed to come up with a deployment plan. Our environment isn’t terribly large – under 100 workstations would be deployed. When I was with Wright Medical and Warehouse 86, I would multicast with Ghost. When I was with IT Workshop, we never had deployments large enough to need multicasting.

“Back in the day” at FiestaNet I created an “ad hoc” imaging environment using DOS USB boot disks built from the universal TCP/IP network boot disk at netbootdisk.com and Ghost 7. I know how to drop updated DOS drivers into the boot disk, so unsupported network cards aren’t an issue. I would boot to the boot disk, map to a share on a Windows server, and pull the image across. This is a little more problematic with Windows 7 and Windows 2008 R2 servers. First, you can’t map to a 2008 R2 network share from a DOS client without some security policy changes on the 2008 server. Second, you’ll need to do a quick repair on the Windows 7 client after the image comes across, because it won’t boot when deployed with such an old version of Ghost (the partitions will be off by one). Still, it works (I use it at home for builds and rebuilds), and I may document it one day just because it is funny that something I put together in 2001 still works – especially since Ghost 7 is in no way supported for Windows 7 deployments.
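
For the curious, the pull itself was roughly the following once booted to the DOS stick. The server, share, and image names here are made up, and dst=1 assumes the target is the first disk:

net use z: \\server\images
ghost.exe -clone,mode=load,src=z:\win7.gho,dst=1 -sure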

I knew I wasn’t going to get a commercial deployment tool approved for this project. I also (honestly) didn’t think I’d need one for a deployment this small. So, this would be done with free tools. Next, would I be moving images over the LAN, or performing the installs locally? Due to the limited space in the IT work area, I could only prep 4 workstations at a time. 4 at a time meant no need for multicasting. Also, since I carry 10 USB sticks in my backpack ranging in size from 8 to 32 GB, I decided that I would just do everything from a stick as opposed to adding the delay of performing the install across the LAN. So, no need for Windows Deployment Services (although I did think it would have been fun to try it).

So, basically I’m going to install Windows 100 times using images and installers created using the Windows AIK.

I downloaded the WAIK, and since my workstation is 64 bit, installed the 64 bit version from the DVD using the wAIKAMD64.msi installer. Next I created a bootable USB drive by using the following steps:

Create bootable USB

Click Start, point to All Programs, and then click Microsoft Windows AIK.
Right-click Deployment Tools Command Prompt, and then click Run as administrator.
Type copype.cmd amd64 C:\winpe_amd64 – press ENTER.
Type copy C:\winpe_amd64\winpe.wim C:\winpe_amd64\ISO\sources\boot.wim – press ENTER.
Type copy “C:\Program Files\Windows AIK\Tools\amd64\ImageX.exe” C:\winpe_amd64\ISO\ – press ENTER.
Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.

Next, I converted the file system from FAT32 to NTFS with the command convert H: /fs:ntfs. I did this to support the WIM files I would create, which would be larger than the 4 GB file size limit for FAT32. I converted instead of originally formatting the sticks as NTFS because formatting as NTFS would cause the format to hang. Converting the file system after the fact always worked, so that is the process I followed.

Finally, I used the command:

xcopy /s C:\winpe_amd64\ISO\*.* H:\ (because my USB drive again was “H”)

Now I have a bootable USB stick with which I can copy images off of workstations for redeployment using ImageX.

Create Images

With the environment out of the way, I needed to create those images. I decided I needed three different images. One image included Office 2010, one image did not include Office, but did include Outlook, and one included neither Office nor Outlook, but did include OWAtray with the expectation that the user would use OWA. All images were fully patched and updated, and also included things like Java, Flash, Shockwave, PDF readers, antivirus, and Firefox.

Creating the image was always the same process. Install everything, patch everything, and then sysprep the system.

The sysprep process I used was as follows:

Click Start, type C:\Windows\System32\sysprep\sysprep.exe in the search box, and press Enter.

You then get this:

System Preparation Tool dialog box

Sysprep needs to be performed twice, so be careful to perform the steps in the right order.

BEFORE THE FIRST SYSPREP, THE DEFAULT ADMINISTRATOR ACCOUNT NEEDS TO BE ENABLED AND IT STILL NEEDS TO BE NAMED ADMINISTRATOR. Audit mode logs in as administrator, and if it cannot, then the result is a system that cannot be logged into.

In the System Cleanup Action list, select Enter System Audit Mode.

In the Shutdown Options list, select Reboot.

Click OK to restart the computer in Audit mode.

After the restart, Windows 7 automatically logs in as Administrator – if it cannot, then you can go no further.

This session is used to delete any and all accounts and profiles that were needed to install software.

Once that is complete, run sysprep again, and this time perform the following:

Open Sysprep.

In the System Cleanup Action list, select Enter System Out-of-Box Experience (OOBE).

Select the Generalize check box.

In the Shutdown Options list, select Shutdown.

Click OK.

This is now an image that can be captured and redeployed.

Capture Image

This is an easy part – boot to the created USB stick, and use it to capture the image locally to the stick. The PC needs to be booted to the USB stick by either changing the USB boot order in the BIOS, or using the one time boot selector (usually F12).

Once the PC has booted to the memory stick, use ImageX to capture the image.

In my case, the command I used was as follows:

F:\imagex /compress fast /check /flags “Professional” /capture D: F:\install.wim “Windows 7 Professional” “Windows 7 Professional Custom”

Where “F:” was the memory stick (confirmed using “dir f:”) and “D:” was the partition with the Windows installation (confirmed using “dir d:”).

ImageX is the command-line tool in Windows 7 that you can use to create and manage Windows image (.wim) files. Compress specifies the compression type: maximum, fast, or none. Check verifies the integrity of the .wim file. Flags is required if you are going to deploy the .wim file with Windows Setup (I did). Otherwise you do not need to specify flags. Capture is the actual collection of the image. D: is what partition, F:\install.wim is where to save, and what to name the .wim file (hopefully you’re using at least a 16GB USB stick in this case), “Windows 7 Professional” is the name of the new .wim file, and “Windows 7 Professional Custom” is the description.

In my case, it took about 20 minutes to capture the image.

Create Deployment Media (using bootable USB)

Follow the same steps as above (everything except copying the WinPE files) to create a bootable USB stick. (I did this 4 times.)

In an elevated command prompt:

Type diskpart – press ENTER.
Type list disk – press ENTER.
Identify the USB stick (usually by size – in this case it was #2).
Type select disk 2 – press ENTER.
Type clean – press ENTER.
Type create partition primary – press ENTER.
Type select partition 1 – press ENTER.
Type format fs=fat32 quick – press ENTER.
Type active – press ENTER.
Type exit – press ENTER.
Convert the file system to NTFS.

Now insert your Windows 7 Volume Licensing disk into your optical drive. (Or mount the .ISO, or whatever method you choose to get to the install files).

In the elevated command prompt window, type xcopy /s D:\*.* H:\*.*, where D is the drive letter of the Windows 7 Volume Licensing media (optical drive) and H is the drive letter of the USB stick you just formatted and made bootable.

In the elevated command prompt window, type xcopy /r J:\install.wim H:\sources\install.wim, where H is the drive letter of the USB stick you created in the previous step and J is the original USB stick with ImageX. (Or you could have previously copied that install.wim file to another location). If prompted, type Y to confirm that you want to overwrite the file.

Eject the USB stick containing your new install files, and you are ready to deploy.

Deploy Image (using bootable USB Deployment Media)

Boot the PC to the deployment USB stick.

Follow the prompts to install Windows 7.

That’s really all there is, so here are the caveats:

We have Key Management Service servers for our Windows 7 keys, so the workstations will self-activate (no need to enter the license key).

I didn’t use an unattend.xml file to apply settings instead of entering them at setup. First, this is because it wasn’t a large deployment, and I could only do 4 at a time. The extra few mouse clicks didn’t slow me down – I was always waiting on the next computer. Second, as our naming convention is to use the service tag as the computer name, I had to type that in on every computer anyway. Joining the domain was no additional trouble, and everything else we customize we apply through Group Policy.

I didn’t “copy profile”. Our environment is very plain vanilla, and even using Windows Easy Transfer to move the profiles, the other person doing this with me was able to put new machines on desks as quickly as I was creating them.

The whole process essentially took 2 weeks from when we got the hardware until all the hardware was in use. Not bad, especially since this wasn’t the only thing we were working on…

Here User, User, User…


When, like today, the 5:00 AM wakeup call comes in that someone cannot get to the Internet, it is always nice to have a little information – like what computer the user is on, where can’t they go, etc.

So, of course this morning’s call contained none of that. Just so-and-so can’t get to the Internet, please fix. Click.

Not having the computer name, I had to go through the chore of finding the computer based on the username logged into it. Fortunately, there are a ton of ways to do so:

1) Back in the days of WINS I could use the winscl.exe command. (Sorry, we don’t have any WINS servers now.) Not really a choice in this example.

2) I could set a Domain Policy to audit account logon events, and then look at the logs on all my domain controllers. It works, but unless I have a tool to consolidate my logs, (I don’t), it can be time consuming to find the domain controller that authenticated the user, and the workstation that sent the logon request.
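
For what it’s worth, on a 2008 R2 domain controller something like the following dumps the most recent logon events into a text file whose Account Name and Workstation Name fields can then be eyeballed. The event count is arbitrary, and this is only a sketch:

wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:200 /rd:true /f:text > logons.txt
findstr /i /c:"Account Name" /c:"Workstation Name" logons.txt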

3) PSLoggedOn from Sysinternals (Microsoft) is a great little tool, but since it won’t scan every machine in my network in one pass, it isn’t perfect. If the machine I need is in the first several dozen, great! If not, I’m out of luck.
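
When it does find them, the search-by-user form is as simple as the following, with “jsmith” standing in for the account in question:

psloggedon.exe jsmith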

4) NBTscan is a great tool for this kind of thing, and I have used it often.

It gives you great output like:

C:\nbtscan>nbtscan 192.168.0.100-200
Doing NBT name scan for addresses from 192.168.0.100-200

IP address       NetBIOS Name   Server     User   MAC address
------------------------------------------------------------------------------
192.168.0.101    WKS-01         <server>   Bob    12-34-ba-c0-52-32
192.168.0.109    WKS-02         <server>   Sam    00-0f-1f-b3-b5-89

C:\nbtscan>

When I have had occasion to use it, it has never let me down.

5) Spiceworks is also a nice system, not specifically for this, but if you are sweeping your network with it, the inventory function will tell you the last logged on user of a given workstation.

6) In today’s case I used User Locator. This tool is one of my favorites. Not only does it return a list of computer(s) that the user is logged onto, but it can bind tools to the remote computer for one-click management of that computer. You can download it for free at http://www.motivatesystems.com/User_Locator.asp

Anyway, there are many ways to find out what computer a user is logged into. These were just the choices I ran down on the way to my selection this morning. You may have free or pay tools you prefer, but this is just another example of how many ways there are to do the same thing in IT. (The best way, of course, is to get the user to tell us what workstation they’re on in the first place…)

Synchronizing IIS on Windows 2008 R2 to Apache on OSX using Dropbox


I’ve hosted websites in my home for a long time. I started back in the mid 90’s when I was a partner at FiestaNet (now ViaWest). First I was on dialup with a static IP, then ISDN, and finally DSL. The nice thing was that up until 2001, although I had to pay for the connectivity, my bandwidth was free.

Since FiestaNet was an all Microsoft shop, I started with IIS 1.0 on Windows NT 3.51. Basically, it could host static pages. Since no one ever went to my site (www.visible-spectrum.com), it didn’t matter all that much.

Over time I moved through every version of IIS. 2.0 came with NT 4.0. Then the upgrade to IIS 3.0 followed, finally replaced by 4.0 in the NT Option Pack. Windows 2000 brought us IIS 5, 2003 brought IIS 6, 2008 brought IIS 7, and finally Windows Server 2008 R2 brought IIS 7.5. I put my web server first behind Proxy Server 2.0, then ISA 2000, 2004, and 2006. The latest version, Threat Management Gateway 2010, protects my home network even now.

This kept up through many employers, several homes, and a few cities. I always upgraded a little ahead of whatever company I happened to be working for at the time. That way I could have some hands-on experience before using new versions in production at the workplace. It was a great way to have a lab that I couldn’t neglect or let fall out of “fully functional”.

This finally came to an end in late Summer of 2010. We were selling our home in Germantown, Tennessee, and did not yet have a new home lined up in Phoenix. My wife and children came out ahead of me to a rental home, and I followed once all the various tasks involved with selling the house were complete. Since my servers had no connectivity (or even a home) for nearly 3 months, it was time to move my sites to a hosting provider – in this case GoDaddy.

My sites have been with GoDaddy for about 17 months now, while we first bought a home, remodeled it, moved in, and eventually got around to such tasks as setting up the server room. Now my sites could have a place to live again, but I have not yet purchased a static IP, so for now the sites are still at GoDaddy. I’m torn between having to maintain uptime at home (since CenturyLink’s connection seems fairly unreliable) and the fact that AreMySitesUp reports downtime from GoDaddy at least once a day (as I’m sure they are fairly oversubscribed on the servers that provide the dirt-cheap hosting package I purchased).

Since I don’t have a static IP, I can’t easily remote into home to get any files when I need them. (Yes, I could use DynDNS or something like it, but I hate spending money). That also means I can’t use my home web server for development if I am not actually at home.

If only there were a way to work on my sites at home in a dev environment and still have them available to me to play with when I am not at home. Oh wait, there is. Dropbox. (Please sign up and get me more free storage!)

So, my sites are on my IIS web server at home, with the WEBDEV folder shared into my Dropbox account. That means those files are also available on my MacBook Pro all the time, with real time updates no matter which device I make a modification on. The sites are hosted on IIS to internal users at home. Now I just need to be able to serve them locally from the MacBook when I’m not at home. Sounds easy, right?

OK, let’s use my http://www.joking.net site as the example. When I am at home, and I go to http://www.joking.net, I go to the public site at GoDaddy. If I go to http://www.joking.dev, I go to my internal IIS server. That works for any device on my LAN. If I leave my LAN and go to http://www.joking.dev, I get a “server not found” error and I am sad. That is because the joking.dev zone lives on my home DNS server, but nowhere else.

First, I need to make http://www.joking.dev resolve from my MacBook Pro, even when I am not on my LAN. That’s pretty easy, I just need to add http://www.joking.dev to my HOSTS file and have it point to 127.0.0.1. Since my virtual machines all use bridged connections, when I am at home I can use Windows 7 to see http://www.joking.dev on IIS, and OS X to see http://www.joking.dev on the local instance of Apache on the MBP.

I’m running OS X 10.6.8 Snow Leopard, so I have a couple of options.

I can open a terminal window, and type

sudo nano /private/etc/hosts

enter my admin password

and I get the hosts file, which I can edit.

If you prefer to use the finder, use the go to folder function, go to /private/etc and open HOSTS. I recommend TextWrangler as the editor if you’re going to go that route. I’m a big fan of Notepad++ in Windows, and TextWrangler is the closest thing to an OS X equivalent.

In this case I add

127.0.0.1 www.joking.dev

and save the file.

Now to actually set up the Apache webserver on the MBP.

Apache isn’t enabled by default, but it is very easy to enable. Either go to System Preferences > Sharing and check “Web Sharing”, or open a Terminal window and enter

sudo apachectl start

If you go through the Terminal you will again be asked to enter your admin password. At this point if you open

http://localhost/

in a browser you should see the text “It works!”.

The directory you are seeing when browsing http://localhost is:

/Library/WebServer/Documents/

In my case my user files are available at http://localhost/~josephking/ and that directory is located at:

/Users/josephking/Sites/

which is fine, but not where my web site files are.

All of my website files reside at paths like:

/Users/josephking/Dropbox/WEBDEV/joking.net/www/

Where WEBDEV is a directory synchronized from my Windows server.

Since I don’t want to move WEBDEV to be under /Sites/ and I don’t want to change the default pathing in Apache, I create a symbolic link in Sites to WEBDEV by opening a terminal and typing:

ln -s /Users/josephking/Dropbox/WEBDEV /Users/josephking/Sites

which creates the link, and if I go to the Sites directory in the Finder, I see the link to WEBDEV.

If I go to http://localhost/~josephking/WEBDEV/ however, I get the following:

Forbidden

You don’t have permission to access /~josephking/WEBDEV/ on this server.

This is because Apache needs permission not only to WEBDEV, but to the entire directory chain from the root down to WEBDEV, and also because Apache needs to be configured to follow symbolic links.

First, permissions:

I need to enter the following in a terminal window:

chmod 755 /Users/
chmod 755 /Users/josephking/
chmod 755 /Users/josephking/Dropbox/
chmod 755 /Users/josephking/Dropbox/WEBDEV/

which then lets Apache actually get to where I want it to go.

Next, to allow symbolic links:

I edit my username config file for Apache at:

/private/etc/apache2/users/josephking.conf

by making it look like the following:

<Directory "/Users/josephking/Sites/">
Options +Indexes +MultiViews +Includes +FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>
<Directory "/Users/josephking/Dropbox/WEBDEV/">
Options +Indexes +MultiViews +Includes +FollowSymLinks
AllowOverride None
Order allow,deny
Allow from all
</Directory>

I then save the file and restart apache by opening a Terminal window and typing:

sudo apachectl restart

and now I can see all of my site documents listed at

http://localhost/~josephking/WEBDEV/joking.net/www/

How 1995 of us! Look at that URL!

Almost there, just a few changes.

First, I want to go to that directory using http://www.joking.dev/

I go into httpd.conf at /private/etc/apache2/ and I remove the # in front of:

Include /private/etc/apache2/extra/httpd-vhosts.conf

I then edit httpd-vhosts.conf at /private/etc/apache2/extra/ by adding:

<VirtualHost *:80>
 ServerAdmin joseph.king@joking.net
 DocumentRoot "/Users/josephking/Dropbox/WEBDEV/joking.net/www"
 ServerName www.joking.dev
 ServerAlias joking.dev
# ErrorLog "/private/var/log/apache2/dummy-host.example.com-error_log"
# CustomLog "/private/var/log/apache2/dummy-host.example.com-access_log" common
</VirtualHost>

I actually added a lot of these, one for each site I have set up in the HOSTS file, but you get the idea.

Last, I go back to httpd.conf for a couple quick tweaks:

modify DirectoryIndex

<IfModule dir_module>
 DirectoryIndex index.html index.htm index.shtml index.php default.html default.htm default.shtml default.php
</IfModule>

to list all my default documents.

Remove the “#” in front of

LoadModule php5_module libexec/apache2/libphp5.so

and

LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so

to enable PHP and CGI.

Restart Apache again, and voila!

I can get to http://www.joking.dev/ from my MacBook Pro on the local Apache server. If I had other machines who needed to see it, I could edit their hosts files or use DNS to point to my laptop as well.
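
For example, another machine on the LAN would just need a line like this in its own hosts file, where 192.168.1.50 stands in for whatever address my MacBook happens to have at the time:

192.168.1.50    www.joking.dev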

Easy, right? Now everyone will want to sync IIS and Apache…