IT Project Leader, Arts and Sciences Computing
441A Williams Hall
I help administer the public computing resources for the School of Arts and Sciences at the University of Pennsylvania. Prior to this, I was the local support provider for the faculty and staff in Claudia Cohen Hall, formerly Logan Hall. I started working for Penn with SAS Computing (SASC) in the summer of 2005 under Dr. Jay Treat and John MacDermott. My role in SASC expanded in the summer of 2007 to include assisted support for classroom and public computers under Albert Matthews. In 2011, I took over IT support for the College of Arts and Sciences office, also located in Claudia Cohen Hall. During the fall of 2012, Albert Matthews took a position outside of the university. I applied for the recently vacated position and, after the traditional interview and hiring process, was awarded the job of IT Project Leader near the end of 2012.
From there, I transitioned to a new, greater role in SASC. I became the immediate supervisor for the LSPs in Claudia Cohen Hall and the Center for Advanced Judaic Studies (CAJS). In addition, I worked with other SASC staff in Multimedia Services to administer the computing resources in public labs, central pool classrooms, and kiosks. I was also assigned to a new team in SASC, WINS, that would coordinate the administration of SASC servers and their resources. It was through WINS that I was given my first big project outside of classroom support: implementing a virtual desktop infrastructure (VDI) solution for SASC public computers.
Over the next year, I worked with Penn’s ISC group to implement a hosted VDI solution for SASC, based on VMware Horizon View. We piloted View in a couple of public labs and smaller grad student labs with moderate success. The biggest problem we have encountered with the virtual desktops is multimedia performance; the sound quality streamed from online sources is poor. Aside from the sound issues, VDI has worked well. Its versatile approach to desktop computing allows easy maintenance and easy reconfiguration. Currently, SASC has over fifty VDI computers available in its public areas.
While I was the LSP for Claudia Cohen Hall, patching the various Windows and Apple Macintosh computers was a never-ending process. SASC didn’t really have a good tool for updating both PCs and Macs with decent reporting. In mid-2014, ISC started to offer IBM’s Endpoint Manager (IEM) as a service to the university for patching and administering both PCs and Macs. SASC formed an implementation project, which I co-chaired, to adopt IEM as a replacement for the longstanding Patchlink application. Over the next year, IEM was adapted to fit SASC’s needs through the efforts of over a dozen people. Procedures were developed for creating a custom IEM installer, distributing that installer to client computers, organizing patching baselines, developing packages to distribute new software and update existing software, and reporting on client computer configurations to the managing LSPs.
We immediately adopted IEM for management of the 200+ classroom computers in SASC, building it into our yearly imaging process. This gave us management capability that simply was not available before. Beyond routine patching, the ability to remotely deploy software, BIOS updates, and configuration changes became part of our workflow. The same could be said for the Apple Macintosh computers; both platforms finally had the same management capability. Features like Wake-on-LAN and complete client computer reporting made administering public computing resources much easier.
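Wake-on-LAN in particular is simple enough to sketch: the management server broadcasts a "magic packet" (six 0xFF bytes followed by the target's MAC address repeated sixteen times), and the sleeping machine's NIC powers the system on when it recognizes its own address. Here is a minimal Python sketch of the mechanism, not the implementation IEM itself uses:

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP; the target NIC listens
    for its own MAC even while the rest of the system is powered off."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

The only real requirements on the client side are a NIC and BIOS configured to honor the packet, which is why it pairs so well with after-hours patching windows.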
Every year, public computing re-images the computers in the central pool classrooms with the latest versions of popular software applications. This process grew out of a need to update those computers en masse at a time when client computer configuration management was non-existent, before the adoption of IEM. The idea is for a user to have the same computing experience in any SAS central pool classroom, no matter which building they are in. The favorite tool for computer imaging across many IT shops had long been Symantec’s Ghost suite of applications. As time progressed, newer versions of Microsoft Windows were released, along with newer ways to configure and distribute them, but Ghost was never upgraded to match those new products. As a result, using Ghost became more precarious each year as newer versions of Windows, specifically Windows PE, became increasingly incompatible with it. A new solution had to be found.
With the introduction of Windows Vista in 2007, Microsoft completely re-engineered the way Windows was customized and distributed, a process completely different from that of its predecessor, Windows XP. This new process evolved into the Microsoft Deployment Toolkit (MDT), a utility for customizing the distribution of Windows Vista and above (though Windows XP was supported as well). In February of 2015, I started getting to know MDT as a possible solution to the Ghost question. IEM offers its own deployment tools, but they are based largely on MDT. MDT offered a few things that Ghost did not. First among these was the use of a hardware-neutral source image. With Ghost, the drivers for the hardware were captured along with the operating system itself. MDT instead relies on Microsoft’s sysprep utility to strip all unique components from Windows before the operating system is captured as an image. During deployment, MDT uses WMI to detect the hardware and chooses the correct drivers to install from a common driver store.
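The model-based driver selection can be expressed in MDT's rule file, CustomSettings.ini. This is only a sketch: the `Windows 7 x64` folder name is a hypothetical example of how a driver store might be organized, and `%Model%` expands to the model string MDT reads from WMI:

```ini
[Settings]
Priority=Default

[Default]
; %Model% expands to the model string MDT detects via WMI, so each
; machine pulls drivers from its matching folder under
; Out-of-Box Drivers in the deployment share.
DriverGroup001=Windows 7 x64\%Model%
DriverSelectionProfile=Nothing
```

With `DriverSelectionProfile=Nothing`, only the drivers in the matching group are injected, rather than letting Plug and Play pick from the entire store.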
In addition to the hardware-neutral image, MDT can use a pre-execution environment (PXE) from Windows Deployment Services (WDS), a separate service included with Windows Server, to boot target computers from the network. Ghost required a boot partition, applied with a boot key, to start the imaging process. A Ghost agent is available, but it wouldn’t work with the newer versions of Windows PE that shipped with Windows 7 and later. Our last operations with Ghost saw us visiting each computer to start the imaging process with a boot key. MDT, in combination with IEM, allows a client computer to be booted from the network, where it connects to a pre-configured, automated MDT deployment share for imaging. That is how we imaged the central pool classroom computers during the summer of 2015: ISC helped us configure PXE boot across the various network subnets, and I would kick off the imaging process from my office with IEM. Best of all, MDT is free from Microsoft, so SASC no longer needs to pay yearly Ghost licensing fees.
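The "pre-configured, automated" part lives in the deployment share's rule files. A hedged sketch of what a lights-out Bootstrap.ini might look like — the server, domain, and account names here are hypothetical placeholders, not our actual configuration:

```ini
[Settings]
Priority=Default

[Default]
; Hypothetical share and service account -- substitute your own.
DeployRoot=\\DEPLOYSRV\DeploymentShare$
UserID=mdt_service
UserDomain=EXAMPLE
UserPassword=<service-account-password>
; Skip the welcome screen so PXE-booted clients go straight
; into the task sequence.
SkipBDDWelcome=YES
```

Pairing this with `SkipTaskSequence`, `SkipComputerName`, and the other `Skip*` properties in CustomSettings.ini is what makes a fully hands-off network image possible.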
Out of over 200 imaging sessions with MDT, not one failed due to network problems or computer hardware. The biggest issue was DNS lookup failures to the deployment share in some buildings. Since then, SASC as a whole, not just public computing, has decided to adopt MDT as its overall imaging solution, replacing Ghost. I’ve used MDT to create new master VMs for VMware View, image kiosks, and build new VDI endpoint computers with the help of group policy. Now, the trend is to do as little local configuration on the Windows client as possible and perform all customizations using domain-based GPOs.
During the summer of 2015, SAS, along with most of the university, signed a site-wide software license with Microsoft. This new license gave SASC access to newer software tools previously available only to enterprise customers. The summer of 2016 will most likely see us in public computing deploy Windows 10 and Office 2016 to the classroom computers. MDT will be essential to that process, as will IEM.
Great insight and articles you’ve written.
I’ve perused a few of them and particularly like the reasoning behind your choices. Like you, I was previously a Ghost user, then moved to WDS, and settled on MDT – moving away from any sort of thick imaging.
I am interested in what configurations you make with GPOs and hope you write something up.
Will be visiting often… just so busy myself.
Thanks! I appreciate you taking the time to read through some of the content. GPO settings could be very good food for a future series of posts.
Excellent blog! I would like to invite you to the “MDT Facebook Group” on Facebook, where we talk about the MDT tool.
I share my blog, in Spanish: http://blogs.itpro.es/octaviordz
I hope to see you soon on Facebook.
I was wondering if you ever completed a build out for Windows 10 LTSB and the steps to mass deploy it?
Yes. I’ve been deploying LTSB for a few years now without any significant trouble.
Question: Once you have completed the image, how long does it take to actually write it to a PC or laptop? I have been using a cloning software program to deploy my images in about 7 minutes, but it requires a lot of time on the back end, building the image for that specific machine. I see your solution as a cleaner one, with a better golden image and a wider array of machine driver setups. My ultimate goal is to knock down the time we spend installing our client machines from 35-40 minutes to something closer to 7 minutes by eliminating the PXE boot process and just dropping an image directly onto the machine. So, before I build your setup and learn all the ins and outs, what’s your write time?