Much has been written about how imaging Macs is basically dead. Monolithic imaging just isn’t viable anymore (thanks, iMac Pro), and modular imaging unfortunately won’t automatically get you things like User Approved MDM (UAMDM). Check out Armin Briegel’s post on the death of imaging.
At the end of the day, however, the goal remains unchanged: deploy a machine as quickly and efficiently as possible while manually touching or configuring as little as possible. So how might you do this with a DEP workflow?
This post is meant to be an overview of our new workflow, acknowledging the pitfalls and hurdles, and pointing to other methods as well, in the hopes of getting your creative juices flowing if you, too, are considering a shift from more traditional imaging workflows to DEP. While we use Jamf Pro, this write-up is meant to be MDM-agnostic.
I will write up a more granular series of posts with all our Jamf Pro policies and configurations at a later time once we’ve fully rolled out this larger workflow over the summer months.
How do you ensure your Macs are connected to your Wi-Fi network regardless of whether a user is logged in?
If your environment uses Active Directory, or another network account system, you need to make sure your Macs are always online so users can log in. Windows computers have the benefit of native machine authentication, but this functionality unfortunately isn’t available on Macs.
Rather than implementing a Microsoft CA and having each Mac request a certificate in order to connect, we developed a solution that achieves the end goal of machine-based authentication for a bound Mac using the following:
A .mobileconfig profile containing a Wi-Fi payload with a placeholder username and password copied to the local machine.
A script that gets the AD computer object name & password from the System.keychain, adds it to the .mobileconfig profile, and then installs the profile on the machine.
While I did not come up with this solution myself, I’ve developed a more streamlined process for deploying our template .mobileconfig profile and script as a postinstall script in an installer PKG.
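The placeholder-substitution step can be sketched roughly as follows. The file paths, placeholder tokens, and credential values below are hypothetical stand-ins for illustration; on a real bound Mac the computer account name and password would come from `dsconfigad` and the System keychain, and the finished profile would be installed with the `profiles` tool.

```shell
#!/bin/sh
# Sketch of filling in a Wi-Fi profile template with the AD computer
# account's credentials. Paths, tokens, and values are hypothetical.
TEMPLATE="/tmp/wifi-template.mobileconfig"
PROFILE="/tmp/wifi.mobileconfig"

# Minimal stand-in for the real template (only the relevant keys shown).
cat > "$TEMPLATE" <<'EOF'
<key>UserName</key><string>__AD_COMPUTER_NAME__</string>
<key>Password</key><string>__AD_COMPUTER_PASS__</string>
EOF

# On a real bound Mac these would come from the machine's AD trust account:
#   AD_NAME="$(/usr/sbin/dsconfigad -show | awk '/Computer Account/ {print $NF}')"
#   AD_PASS="$(security find-generic-password -s "/Active Directory/YOURDOMAIN" \
#              -w /Library/Keychains/System.keychain)"
AD_NAME='labmac-01$'
AD_PASS='example-password'

# Substitute the placeholders into the profile that will be installed.
sed -e "s|__AD_COMPUTER_NAME__|$AD_NAME|" \
    -e "s|__AD_COMPUTER_PASS__|$AD_PASS|" \
    "$TEMPLATE" > "$PROFILE"

# Then install it, e.g.: profiles -I -F "$PROFILE"
cat "$PROFILE"
```

The template ships with harmless placeholder credentials, so it fails closed: until the postinstall script swaps in the real machine account, the profile simply can’t authenticate.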
There are already great guides for how to configure reposado & margarita (the reposado web front-end) on Ubuntu and on Mac. However, neither of these setups gave me everything I wanted in my environment.
Justifications for Docker on a Mac:
Too many web servers: Despite wanting this to run on a Linux server, I couldn’t justify spinning up yet another dedicated web server in our small environment.
Available Hardware & Storage: Unless you are going to manage which individual Apple Software Update catalogs are mirrored by reposado, you’re going to need at least 1TB of storage, as completing a full repo_sync of all available catalogs (as of this writing) takes up a whopping 462GB. Luckily (or unluckily, depending on your POV), we had a severely underutilized Mac Mini that was being used solely as our internal Apple Service Toolkit (AST) NetBoot server, plus a spare 2TB external USB3 hard drive.
Operating System: The Mac Mini I had available was several macOS versions behind, and I wanted to avoid upgrading it all the way to High Sierra (the currently available macOS Server app, 5.6.1, requires 10.13.4 to run). On that older OS I wasn’t able to install pip or flask (required for margarita), because the downloads failed with TLSv1 errors.
Nginx: While I haven’t worked much with Nginx up to this point, its performance under heavy load is notably better than Apache’s.
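Given those constraints, the resulting stack can be sketched as a compose file along the following lines. This is a hypothetical sketch only: the image names are placeholders for containers you would build yourself from the reposado and margarita projects, and the ports and paths are illustrative.

```yaml
# Hypothetical sketch: image names are placeholders for your own builds;
# adjust ports and paths to your environment.
version: "3"
services:
  reposado:
    image: local/reposado          # placeholder: your own reposado build
    volumes:
      - /Volumes/External/reposado:/reposado   # repo on the 2TB external drive
  margarita:
    image: local/margarita         # placeholder: your own margarita build
    ports:
      - "8089:8089"                # margarita's web UI; adjust as needed
    volumes:
      - /Volumes/External/reposado:/reposado   # shares reposado's metadata
  nginx:
    image: nginx:stable
    ports:
      - "80:80"                    # serves the mirrored catalogs to clients
    volumes:
      - /Volumes/External/reposado/html:/usr/share/nginx/html:ro
```

Keeping the large mirrored catalogs on the external drive and mounting them read-only into Nginx means only reposado’s sync process ever writes to the repo.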
Recently I took up the task of converting our department’s various paper forms to digital. Not only were we collecting a lot of paper over the years with our various forms, but we were duplicating our paperwork in a Google Spreadsheet manually, which led to data inconsistencies and significantly delayed returns of equipment. These forms included:
A student technology loan form
An employee technology loan form
A technology usage form for equipment dedicated to employees and departments each year
A technology work authorization form for granting permission to work on students’ and teachers’ personal technology
A data recovery acknowledgement form
I was looking to accomplish the following:
Eliminate our paper completely. We had a filing cabinet full of completed forms in differing sorting arrangements, which made them difficult to find and extremely difficult to cross-reference to see everything different people had checked out or been loaned.
More clearly define and smooth the process. In the case of student long-term technology loans (more than 1 month), our department requires student advisors to OK the loan before allowing students to check out equipment. Whether a student walked in on their own or their advisor sent in a Helpdesk ticket, the paper forms made the process clumsy by requiring advisors to visit our office to sign off on the loan.
Allow forms to be completed both within and outside our office. While the nature of digital forms makes them easy to share and distribute, I wanted people to be able to complete them on any device.
Store all form submissions in one place. This would make it much easier to keep track of all form submissions.
Allow forms to be resubmitted. Because we would be shifting the responsibility of completing these forms to our users, there was a chance a form could be completed incorrectly. For example, a student could indicate they were checking out a laptop but forget to include the power adapter. We needed to be able to make these changes such that it changed both in the form and the central response repository.
Have an automated way of creating a calendar event for the due date of technology loans. This would help us ensure technology was returned to us on time and serve as a reminder if it hadn’t.
Have a user-friendly form landing page. When forms were completed on one of our department devices, I wanted to have a clean and clear way for our users to access all our forms.
Ultimately I developed a solution using Google Forms and Google Spreadsheets, and it’s one I’m pleased with. The main components are listed below:
A Google Form for each of our existing forms
A single Google Spreadsheet with each Google Form linked to a separate sheet
A script attached to the Google Spreadsheet for tracking the Edit URL of each Google Form submission (more on this in a bit)
An HTML landing page for a clean and user-friendly way of accessing the forms in our office
Alternatively, a Google Site would work perfectly well
In part 2 of “Moving Our Technology Paper Forms to Google Forms”, I covered the process for linking the forms to a single spreadsheet, organizing and protecting the spreadsheet, and creating a Google Script to collect the Edit URL for each form submission.
This post covers the process of creating our HTML landing page for our various forms and getting this on several Android devices and a public computer we had available for this purpose. I also reflect a bit on our deployment.
To address the Meltdown vulnerability, Apple released a security update for macOS High Sierra (10.13) and later for Sierra (10.12) and El Capitan (10.11). While we avoid performing major OS and other software updates during the year to avoid negatively impacting our users, we were eager to patch this widely known security hole. Up until this point we hadn’t had a reason to deploy an institution-wide patch to all our managed Macs, and we actively disable automatic software update checks. So, it was time to figure out a workflow.
Apple normally releases software updates through the Mac App Store first and through other sources later. I prefer grabbing Apple updates from their Support Downloads page, as this allows us to upload a single PKG to our distribution point and then cache the PKG on our machines so that our users can install the software at their convenience through Self Service. However, at the time Apple had not yet released the update to their support page, and while I’ve been eager to test and deploy reposado to keep update deployment within our LAN rather than going out to Apple’s servers, it just wasn’t feasible given the time frame.
While Jamf Pro supports the ability to install all available software updates via policy, I only wanted to download and install the applicable security update. I opted instead to use the Mac’s built-in softwareupdate command-line tool to cache the security update for a later install triggered by our users.
The basic structure was as follows:
Determine the softwareupdate name of the patch and the update ID of the folder that gets created in /Library/Updates
A smart group containing all the eligible machines for the security update
A script to refresh the available softwareupdate list and download the desired security update
A script to install the cached security update
An on-demand policy with a custom trigger to download the security update
An automated policy to invoke the on-demand policy via custom trigger to download the security update behind the scenes
A Self Service policy for our users to install the security update at their convenience
An email notifying users of the update and deadline to install it, after which the update would be installed automatically
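As a sketch of the download step above, the update’s softwareupdate label can be scraped from the tool’s listing and then passed to a download-only invocation. The listing below is a canned sample so the parsing logic can be shown anywhere, and the update label in it is hypothetical; on a real Mac you would capture the live output of `softwareupdate --list` instead.

```shell
#!/bin/sh
# Sketch: find a security update's label in `softwareupdate --list` output
# and cache (download only) that single update. The listing is a canned
# sample with a hypothetical label; on a real Mac use:
#   LISTING="$(softwareupdate --list 2>&1)"
LISTING='Software Update Tool

Finding available software
   * Security Update 2018-001-10.13.2
	Security Update (10.13.2), 1700K [recommended] [restart]'

# Update labels are the lines prefixed with "* ".
LABEL="$(printf '%s\n' "$LISTING" | sed -n 's/^[[:space:]]*\* //p')"
echo "$LABEL"

# Download without installing (caches under /Library/Updates):
#   softwareupdate --download "$LABEL"
# Later, the Self Service policy's script installs the cached update:
#   softwareupdate --install "$LABEL"
```

Splitting the download and install into two scripts is what lets the download run silently behind the scenes while leaving the install (and its restart) to the user’s convenience.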
This post covers the individual components of our security update deployment solution, while a second post covers the policy build and deployment.
See below the jump for the details of our workflow.
“Did you try restarting your computer? …” is undoubtedly the question IT asks users most often. And perhaps unsurprisingly, the majority of user issues are resolved by this simple task. Before most computers had SSDs, this wasn’t a task most users wanted to do, for the simple reason that it took several minutes to close all running applications, reboot, get back to the login screen, and then fully load the OS. Thankfully, most laptops now have SSDs, so this task is significantly faster – 10 to 30 seconds total at most. And yet we still struggle to get our users to reboot.
Rather than wait for the Helpdesk tickets and phone calls to come in, we’ve taken a more proactive approach to encourage our users to reboot their computers themselves.
There are 4 things that make this work:
An extension attribute to collect the last time a machine has rebooted
A smart group to collect all computers that haven’t been rebooted after the desired amount of time
A script that presents the desired notification using jamfHelper
A policy that presents the notification to the user
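The extension attribute can be sketched as below. The boot-time string and “current time” are canned here so the parsing is demonstrable anywhere; on a real Mac the raw value would come from `sysctl -n kern.boottime` and the current time from `date +%s`.

```shell
#!/bin/sh
# Sketch of a Jamf extension attribute reporting days since last boot.
# Canned values for illustration; on a real Mac use:
#   BOOTTIME_RAW="$(sysctl -n kern.boottime)"
#   NOW="$(date +%s)"
BOOTTIME_RAW='{ sec = 1515000000, usec = 0 } Wed Jan  3 12:20:00 2018'
NOW=1517000000

# Pull the epoch seconds out of the kern.boottime string (4th field when
# splitting on runs of spaces and commas).
BOOT_EPOCH="$(printf '%s' "$BOOTTIME_RAW" | awk -F'[ ,]+' '{print $4}')"
DAYS=$(( (NOW - BOOT_EPOCH) / 86400 ))

# Jamf extension attributes report their value inside <result> tags.
echo "<result>$DAYS</result>"
```

With the uptime reported in whole days, the smart group criterion becomes a simple “more than N” comparison on this attribute.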
Whether or not you’re looking to have your users reboot on their own, this same structure can be applied to other notifications you may want to present to your users. As an example, we use a similar structure to inform our teachers and administrators when their internal storage drops below 25% free space, and again when it drops below 10%.
Two years ago, I invested in the Synology 1815+ (8-bay) NAS to serve as my digital media library, my primary computer backup, and a testbed for some of my Docker containers. Sadly, over the winter holiday, after a brief loss of heat in my apartment, it died unexpectedly. RIP.
Thankfully I was able to get the unit replaced fairly quickly and seamlessly transfer my drives from the old 1815+ to the new one without any loss of data, settings, or configuration.
I also bought 2 x 6TB Western Digital Reds to replace two of my existing 3TB Reds, and migrating those drives was also pretty painless.
See below the jump for an overview of this process.