Notes on swapping the internal Xbox One hard disk for a solid-state disk

In a desperate effort to improve the Xbox One user experience (UX), I experimented with swapping out the Samsung Spinpoint M8 5400 RPM hard disk for a solid-state disk (SSD). My hypothesis at the time was that the disk, because of its slow speed, must be very busy and that swapping it out would yield a noticeable performance boost to the operating system and improve overall UX.

It didn't.

But a few folks on Twitter expressed interest in reproducing the experiment so here are my notes. (Thanks for your patience, Stefán!)

Hardware Considerations

  • All Xbox consoles ship in fixed storage configurations (e.g. 500GB, 1TB), with that configuration burnt into the console's flash memory. The configuration is used to rebuild the partition layout on the internal disk in recovery scenarios, presenting an obstacle to a quick and easy disk swap.

    That means an Xbox One 500GB will always want to restore a partition layout compatible with a 500GB disk.

  • All Xbox consoles ship with a SATA2 controller. The Xbox One S ships with a SATA3 drive but the same controller, limiting the drive to theoretical SATA2 speeds. This part swap was likely due to the scarcity and current cost of SATA2 disks.

  • The wireless chipset reports antenna status to the operating system. If this is not plugged in, you will not be able to complete the out-of-box-experience (OOBE), even in a wired configuration.

Software Considerations

  • Encrypted container use is common on this platform. This presents a hypothetical hardware configuration data persistence problem when migrating data from one disk to another.

  • The boot loader appears to maintain state about previous successful/unsuccessful boots, which could lead to unexpected behavior when swapping disks. (More testing is needed in this area.)

  • Anti-rollback protection is present and used, preventing use of older versions of the OS after an update. This can invalidate hard disk backups very quickly.

  • The disks are set up with a standard GUID Partition Table layout, with strict validation of both header and partition array CRC32 checksums.

  • All partitions must also be assigned a well-known GUID. This should not be confused with the partition type GUID.

  • I did not test whether the disk identifier was also used/validated, but it's not unreasonable to assume so.

  • I did not test if the backup GPT header (backup LBA) was used/validated.

  • The GUID Partition Table entry order is not validated. You can re-order partitions, provided you continue to meet the requirements above.

    As the Temp and User partitions are likely to be most busy, it makes sense to stuff those on the larger, outer hard disk tracks. Their adjacency also allows for short disk head travel for the inevitable back and forth.

    But programmatically generating the User partition on non-standard disks can be tricky because it sits in the middle of the table. Some opt for simplicity, rounding the leftover space down to the nearest gibibyte and ignoring what's left. PowerShell made an exact calculation easy to implement, but it's not necessary.

    It's possible I missed the one application that retrieves an enumerable list of partitions and selects one using a hard-coded index. But the odds of that kind of code surviving a code review at Microsoft are low.

  • Software updates to the Xbox One OS may not be compatible with the new storage; I have yet to receive an update on my test device. I did, however, enable dev mode successfully with no side effects.
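To make the CRC32 requirement concrete, here's a minimal sketch (generic GPT, nothing Xbox-specific) of how the header checksum is computed and stamped, assuming the standard UEFI header offsets:

```python
import struct
import zlib

def gpt_header_crc(header: bytes) -> int:
    """CRC32 of a GPT header: per the UEFI spec, the CRC field at
    offset 16 is zeroed before checksumming header_size bytes."""
    header_size = struct.unpack_from("<I", header, 12)[0]
    zeroed = header[:16] + b"\x00" * 4 + header[20:header_size]
    return zlib.crc32(zeroed) & 0xFFFFFFFF

def stamp_gpt_header_crc(header: bytes) -> bytes:
    """Return a copy of the header with a freshly computed CRC32."""
    return header[:16] + struct.pack("<I", gpt_header_crc(header)) + header[20:]
```

Note that if you regenerate the partition entry array, you must recompute its CRC32 (stored at header offset 88) before stamping the header, since the header checksum covers that field too.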

Tools Used


  1. Disassemble the Xbox One and remove the disk.
  2. Use a USB to SATA bridge to copy the contents of the disk to the PC.
  3. Use the partitioning script to ready a blank SSD for Xbox One use.
  4. Copy the contents of the disk (i.e. each partition) to the newly prepared SSD.
  5. Plug the SSD into the Xbox One and boot.
  • The Xbox One may exhibit odd behavior at this point. It may boot but report free space incorrectly, or it may not boot at all. This is the hypothetical hardware configuration data persistence problem I was referring to. To fix this, I reset the console.
  6. Restore the Xbox One to its factory defaults. Be sure to complete OOBE and gracefully shut down the Xbox One after that's completed.
  • At this point, the Xbox One has restored the original partition layout on the disk, which is not what you want. (See considerations above.) But the characteristics of our disk should now be implanted within a container somewhere, which is great.
  7. Remove the SSD from the Xbox One.
  8. Use a USB to SATA bridge to copy the contents of the SSD to the PC.
  9. Use the partitioning script to wipe and ready the SSD for Xbox One use again.
  10. Copy the saved contents back to the freshly prepared SSD.
  11. Plug the SSD into the Xbox One and boot.
  • The Xbox One should now report the available free space correctly and everything should be functioning normally. Be sure to read the considerations above for potential future gotchas.
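For reference, the gibibyte-rounding approach from the software considerations can be sketched as follows. This is an illustration only (the function name and the 512-byte sector assumption are mine), not the actual partitioning script:

```python
SECTOR_SIZE = 512   # bytes; assumed, matches typical drives
GIB = 1024 ** 3

def user_partition_sectors(total_sectors: int, other_sectors: int) -> int:
    """Size the User partition on a non-standard disk by rounding the
    leftover space down to a whole number of gibibytes and ignoring
    the remainder, as described above."""
    leftover = (total_sectors - other_sectors) * SECTOR_SIZE
    return (leftover // GIB) * GIB // SECTOR_SIZE
```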

Adding the "Aero Glass" blur to your Windows 10 apps

Since the reintroduction of Aero Glass in Windows 10, I've been receiving questions on how to incorporate that functionality into 3rd party applications. A few nights ago I looked into it and here's my guidance:

  1. Abandon code that uses DwmEnableBlurBehindWindow. This function hasn't been deprecated, oddly enough, but is effectively dead on Windows 10. Stop using it. (Also consider abandoning DwmExtendFrameIntoClientArea.)

  2. Start using SetWindowCompositionAttribute directly. It's not officially documented but here's the plumbing you need, if you're writing C# utilizing Interop Services tooling:

  3. On the view side, you don't need to worry about chroma keys anymore! Simply ensure your window uses a (background) brush with an alpha channel and the compositor will handle the rest.
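The interop plumbing referenced in step 2 didn't survive in this copy of the post. As an illustrative stand-in, here's the same idea sketched in Python via ctypes; the struct layouts and constants (WCA_ACCENT_POLICY = 19, ACCENT_ENABLE_BLURBEHIND = 3) come from community reverse engineering of this undocumented API and may change in future builds:

```python
import ctypes

# Values observed via reverse engineering; not officially documented
WCA_ACCENT_POLICY = 19
ACCENT_ENABLE_BLURBEHIND = 3

class ACCENT_POLICY(ctypes.Structure):
    _fields_ = [
        ("AccentState", ctypes.c_int),
        ("AccentFlags", ctypes.c_int),
        ("GradientColor", ctypes.c_uint),
        ("AnimationId", ctypes.c_int),
    ]

class WINDOWCOMPOSITIONATTRIBDATA(ctypes.Structure):
    _fields_ = [
        ("Attribute", ctypes.c_int),
        ("Data", ctypes.c_void_p),
        ("SizeOfData", ctypes.c_size_t),
    ]

def enable_blur(hwnd: int) -> bool:
    """Ask the compositor to blur behind the given top-level window."""
    accent = ACCENT_POLICY(AccentState=ACCENT_ENABLE_BLURBEHIND)
    data = WINDOWCOMPOSITIONATTRIBDATA(
        Attribute=WCA_ACCENT_POLICY,
        Data=ctypes.cast(ctypes.pointer(accent), ctypes.c_void_p),
        SizeOfData=ctypes.sizeof(accent),
    )
    user32 = ctypes.WinDLL("user32")  # Windows only, loaded on demand
    return bool(user32.SetWindowCompositionAttribute(hwnd, ctypes.byref(data)))
```

Call enable_blur(hwnd) with a top-level window handle on Windows 10, and pair it with an alpha-channel background brush as described in step 3.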

The sample project I used to create the screenshot above can be found on GitHub.

Have fun!

Running your own app at the click of the Surface Pen button

Thursday, I picked up a Surface 3 — my first Surface with pen input — and was surprised at the lack of customization options for the pen's top button. Searching around, I found some clever hacks using AutoHotkey and EventGhost but I wasn't really interested in installing middleware. So I took a peek under the hood and found an inbox solution instead.

Thankfully, Microsoft was nice enough to bake in some overrides for which app gets launched. These overrides, unsurprisingly, live in the Lockscreen ClickNote component (lockscreencn.dll).

Upon every click of the pen button, this component reaches out to a registry key, specifically:


It looks for an AppID or DesktopAppPath value and if one is found, retrieves its data and executes an immersive or desktop app accordingly. Otherwise, OneNote is launched via a built-in AppID:


Before you go stuffing Notepad or your app into the DesktopAppPath value, be aware that the path is passed through GetFileAttributes to test for existence. And before passing through ShellExecuteEx, a command line argument of /screenclip, /fromlockscreen, or /hardwareinvoke is tacked on, depending on how the button is clicked or the state of the Surface at time of click. So, for example, if you want to avoid Notepad complaining about the lack of a /hardwareinvoke.txt, you will want to wrap it in a script.
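Such a wrapper can be tiny. Here's a hypothetical Python sketch that swallows the appended pen arguments before launching the real target (the argument list is taken from the behavior above; Notepad is just an example target):

```python
import subprocess
import sys

# Arguments lockscreencn.dll tacks on, per the analysis above
PEN_ARGS = {"/screenclip", "/fromlockscreen", "/hardwareinvoke"}

def strip_pen_args(argv):
    """Drop the pen-button arguments so the target app never sees them."""
    return [a for a in argv if a.lower() not in PEN_ARGS]

if __name__ == "__main__":
    # Launch the real app without the appended pen argument
    subprocess.Popen(["notepad.exe"] + strip_pen_args(sys.argv[1:]))
```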

I haven't spent much time on the immersive app side, so am eager to see what folks do in that space. I plugged in FreshPaint's AppID, for example:


... and had mixed (but mostly positive) results.

Free idea corner: PowerPoint could benefit from button-based slide navigation. An app-aware button remapping hub would be cool too.

Microsoft is embracing and extending Wi-Fi Display

As part of a recent open specifications update, Microsoft has revealed it's extending the Wi-Fi Display specification to improve a number of wireless display scenarios. These extensions are described in the MS-WFDPE-Preview and MS-WDHCE-Preview specifications and include additions like a low-latency stream to carry mouse cursor data, a method of managing desired latency, and better error reporting.

Let's take a look at each of the additions.

Dynamic resolution and refresh rate

In scenarios where the source device changes video stream resolution or refresh rate -- think gaming -- devices normally require Real-Time Streaming Protocol (RTSP) renegotiation or, more often, freak the hell out and require you to restart your streaming experience. To smooth that over, Microsoft is introducing a method of detecting these changes (and a way for devices to report they support such). More specifically, devices that report support for this feature will monitor the H.264 stream's sequence parameter set/picture parameter set (SPS/PPS) for changes in resolution and frame rate and will adapt seamlessly.
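A toy sketch of the sink-side idea, for illustration: detect a parameter-set change by comparing raw SPS NAL units (type 7) pulled from the Annex B stream. A real device would parse the SPS for the actual width, height, and timing information; comparing raw bytes is just the cheap stand-in here:

```python
def extract_sps(stream: bytes):
    """Yield raw SPS NAL units (type 7) from an Annex B H.264 stream."""
    i = stream.find(b"\x00\x00\x01")
    while i != -1:
        start = i + 3
        end = stream.find(b"\x00\x00\x01", start)
        nal = stream[start:] if end == -1 else stream[start:end]
        if nal and nal[0] & 0x1F == 7:  # nal_unit_type 7 == SPS
            yield nal
        i = end

class SpsMonitor:
    """Report True when the SPS bytes differ from the last ones seen."""
    def __init__(self):
        self.last = None

    def changed(self, chunk: bytes) -> bool:
        for sps in extract_sps(chunk):
            seen, self.last = self.last, sps
            if seen is not None and sps != seen:
                return True
        return False
```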

Latency management

When it comes to latency, we typically think lower is better. But that's not always the case. For example, gamers require low latency to minimize input lag. Because video frames are pumped through as fast as possible, it's common for some tearing or artifacts to appear. But movie viewers don't care about latency. They want a pixel perfect jitter-free viewing experience. To achieve that, devices may extend their input buffer and hold onto video frames longer, a method that introduces a measurable but completely acceptable amount of latency.

The device manufacturer's dilemma surfaces here: Do they optimize for gaming? Or for casual movie viewers? Or do they release two SKUs of the same hardware with slightly tweaked software?

To overcome this huge pain point, Microsoft is introducing a capability for devices to receive a "latency mode" from source devices. The idea is that the source will have the context and responsibility of communicating the user's intended use of the wireless display. For example, the source could detect which app is in use (e.g. Windows Media Player or Microsoft PowerPoint) and send the appropriate latency mode (e.g. high or low, respectively).
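A hypothetical source-side policy might look like this (the app names and mode strings are illustrative; the actual values live in the MS-WFDPE-Preview spec):

```python
# Apps whose output favors smooth, deeply buffered playback
HIGH_LATENCY_APPS = {"wmplayer.exe"}

def latency_mode(foreground_app: str) -> str:
    """Map the foreground app to the latency mode sent to the sink."""
    return "high" if foreground_app.lower() in HIGH_LATENCY_APPS else "low"
```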

Separate mouse stream

Wi-Fi Display is pretty simple in terms of its inputs. It supports one stream that is chock full of audio and video data. That works great for movies but not so much for scenarios involving input. And that's especially true for a mouse.

Microsoft is introducing a capability that will eliminate the move-the-mouse-and-wait game by decoupling the mouse from the video completely. This works by enabling a source device to send a separate mouse stream to a target device. The receiving side would then be responsible for combining the mouse cursor data with whatever is being displayed on the screen at the time.

If this sounds familiar, that's because Microsoft already does something very similar as part of its Remote Desktop Protocol (RDP).

Error reporting

You're streaming a game then poof, the stream is dead. What happened? From the perspective of the source device, you lost connection to the target. It knows something happened. But that's all the information you're going to get. Good luck troubleshooting that.

Microsoft is introducing a more formal method of reporting error details back to the source device. Supported devices will be on guard for "teardown" requests and provide reasons to enhance diagnostics and improve overall usability.

Richer metadata

Microsoft is also opening up some metadata enhancements made by Intel for its Intel Wireless Display (WiDi) solution. (These enhancements are listed in the tightly controlled Intel WiDi Specification.) Devices can use Intel-defined fields to report back rich metadata such as a friendly name, support URL, version, and logo.

Device support

From an operating system perspective, most of these features are available for use in Windows 10 Technical Preview. But my testing indicates no devices currently implement the new capabilities. This is likely a sign of a Microsoft Wireless Display Adapter update on the horizon.

Wi-Fi Display dongles and associated latencies table

I've written about the Wi-Fi Certified Miracast program (and related Wi-Fi Display specification) before, so I'll spare you the intro. But what you may not know is that for about two years now, I've been testing and collecting wireless display dongles, an esoteric hobby for sure. Per request, I put together a chart of my devices and associated latency observations and am sharing that today.

When I refer to latency, I'm talking about the time it takes for an image on a source device (e.g. a Surface Pro) to appear on the target device (e.g. TV).

My test parameters are as follows:

  • Source device: Surface Pro
  • Target device: LG 47LG70
  • Distance between devices: 3ft
  • Resolution: 1080p @ 30fps

Testing involves running a simple program that displays a counter and loops through a movie trailer that covers over 90% of the screen to exercise motion compensation algorithms and observe quality and latency hits, if present. I then take a picture with a DSLR of both the source and target (in the same frame) and subtract the counts to determine latency.
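The arithmetic is trivial but worth pinning down (this assumes the on-screen counter ticks in milliseconds):

```python
def observed_latency_ms(source_count: int, target_count: int) -> int:
    """Latency per the DSLR method above: photograph both counters in
    one frame and subtract; the source is always ahead of the target."""
    if target_count > source_count:
        raise ValueError("target cannot be ahead of the source")
    return source_count - target_count

# e.g. observed_latency_ms(1500, 1417) -> 83
```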

Device | Cost | Firmware | Avg. Latency (ms) | Works with Windows | Miracast Certified
Actiontec ScreenBeam Pro | $68.99 | — | 83 | Yes | Yes
MOCREO iPush | $41.80 | 3.0.0-rc1 | — | — | —
Netgear Push2TV (PTV3000) | $48.99 | 2.4.53 | — | Yes | —
Samsung Wi-Fi AllShare Cast Hub | $68.98 | — | — | — | —
Lenovo Wireless Display Adapter (WD100-SL) | $29.51 | — | — | — | —
Belkin Miracast Video Adapter (F7D7501) | $59.99 | 2.51 | — | Yes, but unstable | Yes
Microsoft Screen Sharing for Lumia Phones (HD-10) | $69.99 | 2.0 | — | Yes | —
Amazon Fire TV Stick | $39.99 | — | — | Yes, but unstable | Yes
Microsoft Wireless Display Adapter | $57.98 | — | — | Yes | —
Tronsmart T1000 Mirror2TV | $29.99 | — | — | — | —
Xbox One 🔧 | $345.99 | — | — | Yes, but unstable | —
HDMI cable reference | $5.09 | — | — | — | —

(— = value not recorded)

🔧 Xbox One latency was measured with different hardware (Surface 3), so its number isn't directly comparable to the other devices. This will be fixed when all devices are re-tested with new hardware.


03/31/2015 - Amazon Fire TV Stick now works with Windows, measured latency added

05/01/2015 - Added preliminary latency for Xbox One

05/15/2015 - Modified latency for Xbox One using new Surface 3 hardware

Building a Microsoft Wireless Display Adapter base image

After I picked up (and disassembled) a Microsoft Wireless Display Adapter, I started bugging Microsoft for the source code to the Linux-powered base image -- the underlying host operating system and support infrastructure. After three months of back and forth, the Source Code Compliance Team finally uploaded a buildroot that can be used to generate a base image. Unfortunately, it came with zero documentation.

Here's my general step-by-step on building the base image with Hyper-V and CentOS 7. Feedback is welcome.

  1. Create a virtual machine and install CentOS 7.

    • If you choose to go down the Generation 2 VM route (recommended), don't forget to shut off Secure Boot.

    • Ensure you create a non-root administrative user during install. Mine will be rafael and will be used for all commands henceforth.

  2. Download the buildroot (Microsoft Wireless Display Adapter 1.0, September 2014).

    You only need the buildroot package from the Third Party Source Code Disclosures website.

    A little background: Initially, Microsoft uploaded a bunch of smaller buildroot output artifacts. But after some back and forth, I was able to convince them to upload the buildroot itself, making the other files less useful.

    You'll also want to get the zip copied over and unzipped in the virtual machine. The following instructions will assume you unzipped the buildroot folder in the user's home directory (~/buildroot).

  3. Install some prerequisites.

    Buildroot requires a few packages to operate. They can be installed easily via one command:

    sudo yum install net-tools flex which sed make bison binutils gcc gcc-c++ bash patch gzip bzip2 texinfo perl tar cpio unzip rsync wget ncurses-devel cvs git mercurial subversion python bzr

    If you decide to use the 64-bit version of CentOS, you'll also need some 32-bit support packages. They, too, can be installed easily via one command:

    sudo yum install compat-libstdc++-33.i686 libstdc++.i686 libstdc++-devel.i686
  4. Build an old ldconfig

    Due to a bug in newer versions of the GNU C Library, ldconfig has trouble with ARM architectures. So we need to grab a working copy and recompile ldconfig for the toolchain. (Alternatively, we could downgrade the entire OS.)

    Let's first set up the required root folder then download and extract the software:

    mkdir glibc-build
    cd glibc-build
    wget https://ftp.gnu.org/gnu/glibc/glibc-2.15.tar.gz
    tar -zxvf glibc-2.15.tar.gz

    Now, still in the root folder we created, issue the commands:

    ./glibc-2.15/configure --prefix=/usr
    make

    Compilation will take a few minutes. When it's done, copy the replacement ldconfig into the buildroot toolchain:

    cp ./elf/ldconfig ~/buildroot/output/host/usr/bin/arm-none-linux-gnueabi-ldconfig
  5. Configure Buildroot 2012.02

    Before we can build the toolchain and packages, we need to write out a .config file. First, enter the buildroot folder and issue the command:

    make menuconfig

    In the configurator that appears, select Load an Alternate Configuration File and provide the following configuration file:


    (Microsoft has provided a slew of configuration files but none of them perfectly match the retail device configuration. For our purposes, however, it's good enough. You're free to select more packages for compilation.)

    Exit the configurator and save your changes.

    Now let's quickly fix up some missing execute permissions and we'll be ready to go:

    chmod +x ~/buildroot/support/scripts/*
    chmod +x ~/buildroot/support/gnuconfig/config.guess
    chmod +x ~/buildroot/support/dependencies/*.sh
  6. Build a base image

    This part is easy. Still in the buildroot folder, simply issue the command:

    make

    And wait.

    My virtual machine was configured with a mere 1GB of RAM and 4 virtual processors clocking in at 3.4GHz. The build completed in about 10 minutes.

    After the build completes, you'll have a nearly complete root filesystem for the adapter stored in the target folder.

  7. Deployment? Microsoft hasn't provided the scripts or steps necessary to create an actual image to flash onto the Microsoft Wireless Display Adapter, so the instructions must come to an abrupt end here.

    If you have root on the device -- coming in a later post -- you can use the toolchain to compile packages and transfer them over to the device.

    Know how to take this further? Or how to connect to/reflash NAND? Or have experience building jigs? Email me. I'd love to pick your brain.

How to set up synthetic kernel debugging for Hyper-V virtual machines

Windows 8 (and Windows Server 2012) introduced a new debugging transport called KDNET. As the name implies, it allows kernel debugging over a network and can be faster and easier to set up than its predecessors (e.g. COM and IEEE 1394).

MSDN has great background information on setting up kernel debugging via Visual Studio and by hand; however, Microsoft's official stance on virtual machine debugging is to continue using the old and slow serial port, even on generation 2 virtual machines. This makes some tasks, like dumping memory or resolving symbols, slow and tedious.

But unofficially you can instead use what's internally referred to as "synthetic debugging".

![Figure 1 - Simplified Synthetic Debugging Architecture](/content/images/2015/02/hv_synthetic_simplified.png)
Figure 1 - Simplified Synthetic Debugging Architecture

To understand how it works, first consider a common KDNET scenario on a physical machine. When KDNET is enabled, the Microsoft Kernel Debug Network Adapter built into Windows takes over (and shares) the physical network device installed in the machine. Communication to and from the machine (and kernel debugger) occurs over the network as expected and life is grand. But virtualized environments (in child partitions) add another layer of abstraction to the underlying hardware that presents a problem -- the Kernel Debug Network Adapter cannot latch directly onto the physical network device.

Cue the magic behind synthetic debugging.

To overcome the inability to directly control network hardware, KDNET was built with smarts to detect virtualized environments and switch to communicating over VMBus -- an inter-partition communication mechanism -- with the host operating system (in the parent partition), as shown in Figure 1. The parent then exposes a KDNET endpoint for you to communicate with the virtualized environment over the network. With this setup, the virtualized environment doesn't require network connectivity!

To set up synthetic debugging (for Windows), you need to be running:

  • Windows 8 or Windows Server 2012+ on the host side
  • Windows 8 or Windows Server 2012+ on the guest side (generation 1 or 2)

Here's the step-by-step:

  1. On the guest machine, open an elevated command prompt and issue the commands:

    bcdedit /dbgsettings NET HOSTIP: PORT:55555
    bcdedit /debug on

    When KDNET kicks in and detects the virtualized environment, the HOSTIP and PORT parameter values are ignored, in favor of the VMBus.

  2. Copy the key value displayed and keep it handy.

  3. On the host machine, open an elevated Powershell instance and run the following script after adjusting the VMName and DesiredDebugPort values:

    $VMName = "Virtual Machine name here"
    $DesiredDebugPort = 55555
    $MgmtSvc = gwmi -Class "Msvm_VirtualSystemManagementService" `
    	-Namespace "root\virtualization\v2"
    $VM = Get-VM -VMName $VMName
    $Data = gwmi -Namespace "root\virtualization\v2" `
    	-Class Msvm_VirtualSystemSettingData |
    	? ConfigurationID -eq $VM.Id
    $Data.DebugPort = $DesiredDebugPort
    $Data.DebugPortEnabled = 1
    # Commit the settings change through the management service
    $MgmtSvc.ModifySystemSettings($Data.GetText(1)) | Out-Null

    Be sure to specify an open ephemeral (49152-65535) UDP port here.

    When the machine is powered on (or reset), the virtual machine's specific Virtual Machine Worker Process (vmwp.exe) will attempt to bind to that port on all (management) interfaces.

  4. Power cycle the guest machine. A reset is not sufficient as it doesn't tear down the Virtual Machine Worker Process.

  5. On any machine on the network, connect a debugger to the Hyper-V host machine with the port and key from earlier. For example, to connect with WinDbg, issue the following command:

    windbg.exe -k net:target=hyperv-host,port=55555,key=a.b.c.d

When not configured correctly, synthetic debugging can be quite a pain to troubleshoot. Here are some tips if you run into problems:

  • Ensure the guest is running Windows 8 or above and the active BCD entry is configured correctly (via bcdedit).

  • Ensure the guest's Virtual Machine Worker Process has bound to the port you specified with TcpView or CurrPorts. This can be extremely problematic if your Hyper-V host is also a DNS server like mine.

  • Ensure no other virtual machines are configured to use the same port.

  • Ensure any firewalls in the way are configured to allow UDP traffic through via specified debug port.
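TcpView or CurrPorts is the right tool for the bind check, but if you'd rather script it, a quick (hypothetical) probe run on the Hyper-V host does the trick:

```python
import socket

def udp_port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Rough check that *something* (ideally vmwp.exe) grabbed the UDP
    debug port: if we can't bind it ourselves, it's taken."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((host, port))
    except OSError:
        return True
    finally:
        s.close()
    return False
```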

Happy debugging!

New experimental console features in Windows "Threshold"

Microsoft is expected to deliver its first technical preview release of Windows codenamed "Threshold" tomorrowish. And while the usual outlets will be covering the big changes, I wanted to document a relatively smaller set of welcome changes to the Command Prompt (and the underlying Console Host).

Experimental Tab in the new Command Prompt

Here's the new experimental tab in Threshold's Command Prompt properties window. This tab exposes switches that turn on and off new experimental features that apply to all console windows -- including the one that hosts PowerShell.

Let's go over them.

Enable line wrapping selection

In previous versions of the Command Prompt, selecting text for copy involved crudely painting a selection box on the screen and hitting the Enter key. But you weren't done. You also had to paste that text into a text editor to correct the abrupt line endings. It was a terribly slow and error-prone process.

But that's all in the past now.

In Threshold, you can now select and copy text as you would expect to in any text editor.

Filter clipboard contents on paste

Pasted text void of fancy quotes and tabs in the new Command Prompt

Ever pasted in a command only to realize (after it errored out) it was peppered with tabs and fancy quotation marks? I have. And I won't ever again, thanks to a new paste filter in Threshold.

Now when pasting text, fancy quotation marks are converted into their straighter equivalents and stray tabs are removed.
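The filter's behavior is easy to emulate, e.g. in Python (this character set is my guess at the minimum; Threshold may translate more than these):

```python
# Fancy quotation marks and their straight equivalents
SMART_QUOTES = {
    "\u201c": '"', "\u201d": '"',  # curly double quotes
    "\u2018": "'", "\u2019": "'",  # curly single quotes
}

def filter_paste(text: str) -> str:
    """Straighten fancy quotation marks and drop stray tabs."""
    for fancy, straight in SMART_QUOTES.items():
        text = text.replace(fancy, straight)
    return text.replace("\t", "")
```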

Wrap text output on resize

Text resized in new Command Prompt

Resizing the Command Prompt window has always been an alien task. If you somehow managed to get the window to shrink, a horizontal scrollbar would appear and text would remain static and not reflow or wrap given the new constraints.

With this feature enabled, however, the window and its text behave the way you expect.

Enable new Ctrl key shortcuts

Some handy new keyboard shortcuts have found their way into the new Command Prompt as well. I say some because I can't be sure I covered them all. We'll have to wait for the official documentation to come online.

  • CTRL + A - Select all
  • CTRL + C - Copy
  • CTRL + F - Find
  • CTRL + M - Mark
  • CTRL + V - Paste
  • CTRL + ↑/↓ - Scroll (line) up/down
  • CTRL + PgUp/PgDn - Scroll (page) up/down

Extended edit keys

There's not a lot of information available on extended edit keys. This feature has existed in Windows for quite some time but has never really been exposed to users until Threshold.

We'll have to wait for the official word on this feature.

Trim leading zeros on selection

Leading zeros selected in the new Command Prompt

If you find yourself working with a lot of numerical data in the Command Prompt, you may want to turn this on.

When selecting a number with leading zeros (e.g. via double-click), the selection box will begin after any insignificant zeros present. For example, 000001234 becomes 1234. Hexadecimal and decimal prefaced numbers, however, override this rule. That is, 0x1234 and 0n1234 remain selectable in their entirety.
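For clarity, here's the selection rule restated as a hypothetical Python function:

```python
def selection_start(token: str) -> str:
    """Mimic the selection rule: skip insignificant leading zeros,
    except for 0x (hex) and 0n (decimal) prefixed numbers."""
    if token.lower().startswith(("0x", "0n")):
        return token
    trimmed = token.lstrip("0")
    return trimmed or "0"  # keep a lone zero selectable
```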


Opacity

This one is an oddball.

This slider ranges from a ghostly 30% all the way up to 100% (default, fully opaque). But it affects all console windows on your system and the entire host window, not just the background. (The background color is still tweakable via the Properties window, mind you.)

As translucency increases, text readability decreases, so it's not immediately clear who would ever use this. But it's a neat technical demo and a nod to the Windows enthusiast crowd that has undoubtedly been asking for this for years.