Derek Featherstone Expands His Field of Vision with Dead & Company at Sphere

Photo credit: Chloe Weir
When Dead & Company took the stage on May 16 to kick off their residency at Sphere, Derek Featherstone was in his familiar perch as front-of-house engineer for the group—a role he has occupied ever since the band debuted in 2015. Featherstone has also served as tour director for the six-piece, but in Las Vegas he assumed new responsibilities as show producer.
The longtime industry professional is well-equipped to multitask. In 2018, Featherstone became the CEO of UltraSound—the California-based sound company that had employed him for the prior 30 years. “When I’ve been putting together Dead & Company tours, I’ve been doing that alongside navigating a business,” he notes. “I’ve got a lot of great operations guys at UltraSound that keep things running, but I’m still there in one way or another every single day. It’s something we’re all used to, in that you can’t just turn one off and the other on.”
As for his post at Dead Forever, he explains, “It’s interesting to drop into the different roles as the producer of the show. It requires working on all these elements, from content creation to budgets to every bit of it, and then closer to the event, I started shifting my thought process over to doing sound at the same time. So in the heyday of it, I was doing the producing, engineering the show live and navigating the business at the same time, which I’ll admit can be a little challenging on the brain.”
Since Sphere is a new venue with cutting-edge technology, one might think the whole experience would be turnkey on the production side. However, that’s not necessarily the case.
It is probably a turnkey operation for a preset event that doesn’t include a live band. There’s no existing format for video, so right off the bat you think about how to create video with the scaling that works in the Sphere—essentially it’s a 16K by 16K video component. You can’t just walk in there with home movies. You have to create everything and then map it to different servers. You basically cut it up and map it, so it reassembles in that sort of surround video format. The video element is time-consuming. We had five months to put this show together, which isn’t a lot of time necessarily.
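The cut-it-up-and-map-it step he describes can be pictured as simple tiling arithmetic. A minimal sketch, assuming a 16K-square canvas split across a grid of media servers—the 2x4 grid and server names here are illustrative, not Sphere's actual pipeline:

```python
# Sketch of how a single 16K x 16K frame might be tiled across media
# servers so each renders one region of the wraparound canvas.
# The grid layout and server naming are assumptions for illustration.

CANVAS = 16384  # 16K pixels per side

def tile_map(rows: int, cols: int):
    """Return each tile's pixel region as (server_id, x0, y0, x1, y1)."""
    tile_w, tile_h = CANVAS // cols, CANVAS // rows
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append((f"server-{r}-{c}",
                          c * tile_w, r * tile_h,
                          (c + 1) * tile_w, (r + 1) * tile_h))
    return tiles

for server, x0, y0, x1, y1 in tile_map(2, 4):
    print(f"{server}: x {x0}-{x1}, y {y0}-{y1}")
```

Each server then only needs the pixels inside its own region, and the regions reassemble seamlessly into the surround image.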
Then the audio side of it is super state of the art, for sure, but the sound system’s behind the band, so you go back to the Wall of Sound days where you’re faced with all these challenges. That’s not because it’s a bad deployment or bad design, it’s because of physics. You have to make the sound work, essentially playing in an upside-down cappuccino cup. I would say that they did an incredible job with the deployment of that system considering what they’re up against, which is essentially threefold—you’re playing in a glass room, it’s a circle and the PA is behind the band.
Just as a simple example in terms of what we had to do, I went through five different vocal mics with these guys. I’ve used the same vocal mics on this particular band every show for the past 10 years, and I had to swap out to five different ones to try to get the most volume before feedback because, essentially, a guy stands on a vocal mic pad, the mic turns on and the entire sound system goes into the microphone.
So building your mix around the volume and being able to get the vocals loud enough is the key part of this room, and you have to adapt. You can’t go in there and say, “I’ve always done it this way and I’m going to do it like this.”
You saw U2’s performance at Sphere. How did that help you envision what Dead & Company could do?
We brought all the principals of Dead & Company to see U2. So I went there four times with the guys, which enabled me to get a feeling of the canvas. U2 went to a very specific structure, using a lot of the same songs and same content. They built it in the way you would build a Broadway show, where there isn’t much improvising.
Right away I thought, “OK, well what’s Dead & Company going to do?” Then over the course of two or three months, with some programmers at UltraSound, I started working on a database that has all 10 years of history of every song they play and the historical averages of how long the songs are. That meant we’d have at our fingertips a way to move the show around because we would be operating in the time domain, since the content is timed. When you create a piece of art content, you do have loops, so you can make some a little longer or a little shorter, but that’s all stuff you have to do in advance. That means you really have to know exactly how long the songs are, which we don’t pay much attention to typically.
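The database he describes boils down to historical play lengths per song, averaged so a setlist can be timed out against pre-rendered content. A minimal sketch of that idea—the songs, durations and field names here are invented, not from UltraSound's actual database:

```python
# Toy version of the setlist-timing database described: past durations
# per song, averaged to project how long a proposed set will run.
# All data below is invented for illustration.
from statistics import mean

history = {  # song -> past play lengths in minutes
    "Althea": [9.5, 10.2, 11.0],
    "Deal": [7.1, 6.8, 7.4],
}

def average_length(song: str) -> float:
    """Historical average duration for one song."""
    return mean(history[song])

def set_runtime(setlist: list[str]) -> float:
    """Projected runtime of a whole setlist from historical averages."""
    return sum(average_length(s) for s in setlist)

print(round(set_runtime(["Althea", "Deal"]), 2))  # projected minutes
```

With projections like these in hand, the crew can check a proposed setlist against the timed video content before the band walks onstage.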
So right off the bat at U2, I thought we needed to start organizing the data so we could make a show work where we can play 120 different songs, use different content and time it out so it doesn’t end up where you’re in the middle of some song and you run out of video.
It became a collaborative puzzle, creating a setlist based on songs, timing, tradition, content and keeping the flow of the show interesting. John Mayer and Bobby Weir had their own input as the singers, using all the data we could find.
What was your role relative to creating the video?
John was the creative director, Sam Pattinson from Treatment Studios was the head of content creation and then I was the producer or director of the show. So between the three of us, we started working on storyboards from ideas out of John’s head about the roadmap of the show. Bobby and Mickey Hart were executive producers, so after that was drawn up, we got with Bob and Mickey and said, “Hey, what do you think about X, Y and Z?” After that, every three weeks we went back and we all reviewed where we were at with everyone throwing in their thoughts.
My role as a producer involved admin, timelines, budget, focus and really whatever was needed to push the ball down the road. John stayed very involved on the visual creative side and passed all these ideas off to Sam and Treatment with their giant team of animators to build the content.
Treatment is the company in London that had done all of the U2 show and had done all of John Mayer’s tours for about the past 12 years. I think John first saw Sam’s work at a Rolling Stones show like 14 years ago. So the John Mayer tour has been built using Treatment Studios and Sam. Then, maybe four or five years ago, I started using Sam for Dead & Company’s visuals when we were doing our stadiums and sheds.
So it was a working relationship we all had in place and it was a very enjoyable experience. We put a lot of time and energy into it, but Sam has an incredible team at Treatment. Without them, I don’t know what we would’ve done.
I’ve read that with U2, the sound mix had something of a mono feel to it. How did you approach it?
It’s like a mono system, but there are 44 channels that an engineer can approach however they want. If you were to picture what it looks like behind the band, spread across maybe the width of the stage are five main speaker arrays, and you distribute the majority of the band in those zones. So you could say, “I want to put the vocals on the sides and I want to build the stereo imaging,” but the farther to the sides you go and the wider it goes, the less coherent it becomes.
U2 had a really solid, coherent sound that was very friendly with vocals and guitars in particular. You have to experiment and put sound different places in the room and see how far you can stereo-fy it before you destroy it. Everyone’s going to approach it differently.
With Dead & Company, there are six different vocal mics. So to keep the coherency of vocals, you really can’t push them around the room too much. What you can do is put the drums a little wider and you can put the softer instruments, like a Hammond or a Rhodes, even wider than that because those work with distance. If you take something like a snare drum and you put it 60 feet off center on the sound system, it hits those people that it’s closest to real quick and then it hits the people that it’s farther away from later, so all you hear are echoes. You have to put the percussive attack things in the middle and then softer sounds can go to the sides. It’s a different approach and you just have to try it and see what works and what doesn’t work. You can’t decide in advance you want to do it a certain way.
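The echo problem he describes is straightforward path-length arithmetic: sound travels roughly 1,130 feet per second, so a source placed far off center reaches listeners on opposite sides of the room at noticeably different times. A back-of-the-envelope sketch, with illustrative distances rather than Sphere measurements:

```python
# Arithmetic behind the snare-drum example: an off-center source
# arrives later for listeners with a longer path to it, and once that
# gap exceeds a few tens of milliseconds it reads as an echo.
# The 120 ft path difference is an assumed illustrative figure.

SPEED_OF_SOUND_FT_S = 1130.0  # approximate speed of sound in air

def delay_ms(path_difference_ft: float) -> float:
    """Extra arrival time, in milliseconds, for a longer sound path."""
    return path_difference_ft / SPEED_OF_SOUND_FT_S * 1000.0

# Listener near the off-center array vs. one 120 ft farther from it:
print(round(delay_ms(120), 1))
```

A gap on the order of 100 ms is well past the threshold where the ear fuses two arrivals into one sound, which is why percussive attacks have to stay in the middle while sustained sounds can drift wider.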
How long were you in that room before opening night and how much of what you did was on the fly?
We built a demo of the Sphere system up in UltraSound’s warehouses. Phish did the same thing at Lititz and U2 had done the same thing in Europe. It replicates all the channels of audio, putting speakers in front of you, above you and on the sides of you. Then you start experimenting with mixing in that environment so you can see what works and what doesn’t work.
For instance, if you were in the demo room, which is 40 feet by 40 feet by 20 feet tall, you can press a button and the processors shift all the delay times and essentially move you to a different seat. So I’m building these mixes that might sound really good in the mix position at the center of the room, but then I might press the processor switch which takes me to section 201 and everything I did might sound like crap in section 201, so I’d have to redo it all.
You have to really experiment with what works in the room so you can get the most even sound throughout all the seats. Then, when we walked into the building, it turned out that was a pretty good representation.
When we got in the building, we had two days of time in that room. We had as many nights as we wanted—they have a movie in the daytime—and right after Phish left we could work from midnight to 10 a.m., which is when we did a lot of content work.
Then we got the band up there and did one six-hour rehearsal and one four-hour rehearsal, before we did the first show. I think we got as much as we could out of the demo system and then those two days in the building prior to the first show.
Are the musicians closer to one another onstage or did it just appear that way to me due to the scale?
We removed all the junk from the stage, so I think it looks smaller. They’re in the exact same positions, within a foot or so of a typical Dead & Company setup. However, we made the stage as small as we could because there are no tents and there are no guests onstage. We didn’t need all that real estate, so we shrunk down the stage itself, but the band is still taking up the same footprint.
Then the drums are a little bit off center, so there are better sightlines. The drums in Dead & Company have always been off center by three feet, and now they’re four feet off center. We moved the whole drum set-up one foot further, so that when you walk around Jeff Chimenti’s side of the floor, you see the drums sooner.
The band members are all wearing in-ear monitors. Is that required to play in the room?
I’d say they’re required because the timing is so strange in the building with the sound system behind you that if you take your in-ears out and you try to sing, you’ll be out of time. Essentially you have these echoes that will exist because the PA is 60 feet behind you and 50 feet up in the air. You’ll hear your own voice through floor monitors if you’re using them—which you can’t really use efficiently there—then you’ll hear your voice again as you’re singing after the fact. It would be challenging to navigate that chaos. So we have no working floor monitors on the stage. The band members are all using in-ears, and they had to get used to those because that was a new playing field for some of them.
From your perspective, what was that learning curve like for someone like Bob Weir?
Bob Weir is a good example. He loves to sing and listen to the sound system, having grown up in the time he did. When he started, there weren’t floor monitors, there weren’t in-ears. So he got used to hearing the PA and he knows how to sing off it. So for him, it was probably pretty tricky to adjust, but fortunately, three months into the project, when I explained to all these guys how it worked and what would happen, they all went in willingly.
You really don’t have a choice when you’re hearing your voice a second time behind you. When you sing, you’ve got two things. You’ve got the in-ear monitors, which amplify your voice right in your head, but then you have bone conduction, which is another time domain. So your body cavity resonates when you’re singing. That’s zero time. Also, sometimes when you have in-ears, there are a couple milliseconds of delay because of digital conversions, which can mess people up. Then you get into the sound system being anywhere from 50 to 200 milliseconds later, so it would be too hard to sing in that environment.
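The three arrival paths he lists can be laid out on one timeline. A rough sketch, assuming a typical few-millisecond in-ear conversion latency (the 50–200 ms PA figure is his; the PA geometry below is the 60-feet-back, 50-feet-up placement mentioned earlier):

```python
# Timeline of the three ways a singer hears their own voice at Sphere:
# bone conduction (effectively instant), in-ear monitors (a few ms of
# digital conversion latency -- an assumed typical value), and the PA
# arriving much later because of its distance behind the stage.

SPEED_OF_SOUND_FT_S = 1130.0

def distance_delay_ms(feet: float) -> float:
    """Acoustic travel time in milliseconds over a given distance."""
    return feet / SPEED_OF_SOUND_FT_S * 1000.0

pa_distance_ft = (60**2 + 50**2) ** 0.5  # 60 ft behind, 50 ft up

arrivals = {
    "bone conduction": 0.0,
    "in-ear monitors": 3.0,  # assumed A/D-D/A round trip
    "PA behind the stage": distance_delay_ms(pa_distance_ft),
}

for path, ms in arrivals.items():
    print(f"{path}: {ms:.1f} ms")
```

Even at the nearest point of the hang, the PA arrives tens of milliseconds behind the voice in the singer's head, which is why singing off the room instead of the in-ears falls apart.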
Does the fact that they have in-ears help you in any way?
There’s less noise onstage, so it certainly helps in the cleanliness of the audio. It allows us to have a lower overall volume, but more clarity. In the fundamentals of mixing live sound, someone like myself has to get the sound system over the noise onstage. So you’ve got to turn up a certain amount so that the patrons are hearing the sound system versus the stage. If the stage is too loud, you have to turn the PA up to get over the stage volume. With in-ears, you’re doing less defensive mixing and you can toss in different reverbs and other sounds that add to what they’re doing.
Is there any low-level hum or audio interference generated by the visuals?
The only interference we have in that room is all the Wi-Fi so that fans can be online. The Wi-Fi cell extenders are the only things that kind of introduce noise into some of our vintage gear—a Fender Rhodes might start squeaking because it’s got some interference.
But as for the video itself, what we’re doing on that screen doesn’t impact the audio in any noticeable way. It does impact it in the sense that all the speakers are behind the video wall, so you’re blowing speakers through glass panels, which is kind of an interesting dynamic. Some percentage of audio is being stopped but it’s not that much. The higher frequency stuff gets blocked faster though, so you might do things differently with EQ that you wouldn’t normally do to boost the high frequency and push it through the screen. You just have to listen to what’s coming out the front—it changes your approach a little bit.
In terms of the tools at your disposal, can you talk a little bit about using the haptics?
It’s definitely a playground for wayward sound guys. You can do special effects and certain sounds that go around the room. We built all the programs into our console, so at any minute in time, I can push a guitar into a program and it runs around the room in a big circle. You can do a lot of that stuff, but generally speaking, if you do too much of it, it’ll confuse people.
With Dead & Company, during “Drums” and “Space,” you can do anything you want, though. So we have drums coming down from different zones in the roof instead of being in front of you. For “Drums” and “Space,” a lot of the individual drums come from the roof or the sides or the back, so you can really create an immersive environment.
You could do something pretty crazy for EDM stuff in there. You could utilize that room pretty well to trip people out.
It’s a natural place for Dead & Company to play in because of all the visuals and because of the sonic chaos you can create, but you’re always keeping in mind that you don’t want some haptic to upset someone when they are having a good time. You don’t want it to be that in the middle of the guitar solo, the guitar goes away or goes to the other side of the room. That stuff will get you in trouble because people don’t like that.
With the haptics built into the seats, you don’t want the seats shaking every time the kick drum is hit. You also think practically because people are probably standing up during the first song, so don’t waste your time using the seats then. When you get later in the show, people do sit down at times, but you have to be careful because it can be fatiguing. If you have a kick and a snare drum in the seats the whole show, it takes your center of focus off the PA and the band, which also becomes distracting.
There’s a visual piece that begins at Barton Hall and then the gym opens up. During that sequence, a friend of mine believes that you change the audio to evoke the experience of being in a more confined space and then moving to a more open environment. Is that true?
That unfortunately is not true. It’s a cool idea, though.
The one thing that we are trying to do, and may still try to do, is a distributed sound. So when you have the visual up for the Wall of Sound, I have the mapping of what was in every speaker of that system. I have all the drawings for it. One thought we had was to try to remap all the sounds to go to roughly the areas where those speakers are in that video content.
We haven’t really had the time to do it, and I’m not certain it’s going to work because the key thing in that room that we’re trying to do is keep the vocals strong—the hardest thing to mix in the Sphere is vocals. I don’t know if we’ll get there, but it’s crossed our minds, for sure.
Do you think that the average concertgoer would notice the change?
I think you’d hear something different. You wouldn’t have a reference unless you caught the Wall of Sound. Since you’re playing to a whole crowd that most likely never heard that PA, they wouldn’t immediately think it sounded like that. It would be something cool to talk about, something cool to say you did, but frankly, I don’t think people would notice it, or they might even say, “God, what changed? It sounds terrible.” [Laughs.]
You were pretty far along with what you were doing by the time Phish played Sphere. Did you see them and did their show bring something to your attention?
We were pretty far along. I’ve known those guys for a long time and it was good to see what they came up with because they used a different content team with completely different ideas. I didn’t want to come in and have someone say, “Oh, you copied Phish,” or “You copied U2,” without even knowing it—although, some of that doesn’t really matter.
But the key difference between Dead & Company and Phish is that we use way more IMAG reinforcement of the band members playing. Phish chose not to, which is their option, but we use a lot more of the band on the video wall than Phish chose to do. So it was helpful for us that they didn’t do a ton of that, so there’s some difference from show to show.
The way you frame the IMAG is an interesting aspect. Not only that, but the use of it on the first song offers a cool juxtaposition with what happens next.
I think some of that came from the idea that we didn’t know if people would get to the show on time, so we didn’t want one of the key moments of the show to happen in the first seven minutes. That first song is kind of an intro, so if someone’s stuck in the hallway or they’re at the bar or in the bathroom, the show is built in a way where we basically are saying, “OK, this is your time to get in your seat and come experience the event.”
In a worst-case scenario, the visuals could feel like an uncomfortable or distracting version of an amusement park ride. How did that factor into your approach?
I think it’s important to keep it balanced. Even people who enjoy riding roller coasters over and over can feel beaten up by it.
John is really good at the creative side—keeping the balance between the music, the performance and the content. He is careful to make sure one doesn’t outweigh the other. So you throw in these high-value content pieces and then you go to IMAG for a little while and then you throw in another piece.
That balancing of the show is pretty important. If every single song had some crazy roller-coaster ride, there’s something to video fatigue and content fatigue—people would get tired. So pacing the show is pretty important with the use of content.
We also did a lot of nausea testing to make sure stuff isn’t too crazy for people on the front side of it.
Is there a visual that you really enjoy and you’ll make some effort to experience even while you’re working?
The Wall of Sound is always fun, just because of my background with audio. That Rainbow Road thing, which the Wall of Sound morphs into, is also quite interesting. I like the venues, too—the way it kind of steps from building into building. Those are places we’ve been to and it’s kind of a fun movement. The pieces that I’m intrigued by are the ones that have the big movements. They’re fun to watch.
In mixing the band, you’re always responding to the moment. But beyond that—we’re six weeks in at this point—to what extent are you still discovering the room, if at all?
We are working on the fly, that’s for certain. We’re following the band musically and if they go somewhere, we go somewhere.
Then with regard to the room, up until week four and five, I was experimenting with putting vocals in different places and with these other different things. So we haven’t given up, we’re not finished yet, but we’re in a good, stable spot.
Each week I might have an idea and we’ll change something up—“Hey, let’s try this today.” But when you’re mixing, they may go into left field and you have to follow them. You may have someone soloing and they didn’t get into the right volume, so we chase that and bring it up to make them heard. Then if people are soloing on top of each other for fun, you shift them out so you can hear both people. You’re constantly listening.
I don’t know how well I would do in an environment of a preorchestrated show, where it’s the same every night. I might go crazy. [Laughs.]