Tech Tips to be Confident Live

Face to Face, Engage Live, Improvement List

Lockdown has moved everyday work meetings and presentations online. Maybe you’re getting more comfortable in front of the camera, or maybe you’ve already seen a number of tips on how to do live streaming.  

If you’ve seen live translations in your video meetings, then you know there is a NEW audience for your live presentations: Ai. This is even more important if your presentations are going to be published online since Ai looks for very different things than people do.  

Often I’m asked what camera, microphone, or streaming tool I use for livestreams like this one. Since technology improves so quickly, any advice I give you today will likely be outdated in a few months’ time. This is exactly why it’s important to understand the first principles: recreating the face to face experience, measuring engagement, and writing down what you learn. 

Even if you only do meetings live and don’t plan on making any videos public, you’ll gain new insights into what is happening behind the scenes for the videos we see online and why certain videos get recommended over others. 

“When I speak with you it feels like we are not competing,” Jorge Valenzuela told me. Jorge is an EdTech coach who is a passionate speaker on the topics of equity and social emotional learning, and someone I plan to bring to you live in April. 

I responded with “I see anyone who speaks about helping others through equity, creativity, or relating as an ally in our fight against Big Tech and Ai.” The fight isn’t against other people who advocate for creativity; the fight is against Ai Proctoring or monitoring of how you work, learn, and socialize, and how that will profoundly affect our society. 

Some of you in education see this already in universities, where book publishers like McGraw Hill are pouring billions of dollars into Ai Proctoring technology that will monitor your webcam, microphone, WiFi, and Bluetooth to determine if you are cheating on an online test. This technology is real, so we are at a major crossroads in the world of education. We can either choose to maintain the existing system of standardized tests using increasingly powerful Ai technology, or we can move towards hands-on, human focused, creative learning. 

The reality is that the billion dollar investments are not in creative learning but rather in more and more powerful Ai Proctoring, so if we are fighting against each other then creativity is definitely going to lose.

That’s exactly why I want to give your creative best practices an unfair advantage in the marketplace. Once you’ve moved beyond using screen time to sedate and started using it to relate, it’s time to think about creating to disrupt the status quo.

Today we’ll explore: 

  1. How to use tech to recreate the Face to Face experience
  2. How to engage in live conversations online
  3. How to continually improve your livestreams by listing improvements 

Ready to go? Let’s dive into it!

Face to Face

When you speak to another person in a noisy environment like a concert or a restaurant, you are still able to have a conversation because our ears do a phenomenal job of isolating only the sounds that we need to hear. Unfortunately, this is not the case for most microphones, especially those on laptops, as the effective range is only 4-6 inches. That is, you need to be at the distance of the edge of the keyboard in order to be heard well. [1]

That’s exactly why many live streamers opt for external microphones. While most laptop and cell phone mics have an omnidirectional pickup pattern, many long shotgun mics pick up more sound in the direction they are pointed. Another approach that works for podcasters is to put a high quality mic close to their mouth. If you need to move around a lot, then a wireless headset mic will ensure that you’re always 4-6 inches away. This will not only improve an Ai’s translation of your voice, but it will also make it sound like you are closer to those on the livestream. Often video conferencing tools will automatically spotlight the person who is speaking the loudest, so the volume of your mic is very important. 

You can also make the softer parts of your voice louder using compression. Compression automatically increases the mic volume when you are speaking softly and decreases it when you are louder. In the past you needed special equipment to do this, but today some mics like the Shure MV88+ have compression built in.
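
If you’re curious what a compressor is actually doing to the signal, here’s a minimal sketch in Python (purely illustrative; real mics and mixers do this in hardware with attack and release times, and the threshold, ratio, and makeup values below are made up):

    import numpy as np

    def compress(samples, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
        # Work out each sample's level in decibels
        level_db = 20 * np.log10(np.abs(samples) + 1e-9)
        # How far the level overshoots the threshold (0 if it's below)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        # Squash the overshoot by the ratio, then add makeup gain
        gain_db = -over_db * (1 - 1 / ratio) + makeup_db
        return samples * 10 ** (gain_db / 20)

    # A soft sample gets boosted while a loud one gets pulled down,
    # so the two end up much closer in level.
    print(compress(np.array([0.05, 0.8])))   # ~[0.10, 0.33]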

When you have a face to face conversation, you show that you are fully engaged with the other person when you look them directly in the eye. When you look away, it looks like you are focused on something or someone else. As much as possible, you want to be looking at the middle of the camera lens. This can be as simple as putting your video in a small window at the top of your laptop screen, close to where your camera is. This forces you to look up in order to see yourself.  

There is a more advanced technique of putting a camera behind a teleprompter; this way you will be looking directly into the lens when you are speaking. I have a small monitor that I use to show the people I’m speaking to, so that it’s as close to having a face to face conversation as possible. 

The rods and cones of the human eye give us an incredible ability to see both the bright sky and the details of shadows from clouds at the same time. It is estimated that the human eye has a dynamic range of 24 F-stops, almost double that of the best DSLR cameras on the market. [2] In practice, most laptop webcams or selfie cameras have about 1/4 to 1/8 the dynamic range of our eyes. This means they have a hard time making your face easy to see when the white wall behind you is so bright. 
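
To put those numbers in perspective, each F-stop represents a doubling of light, so dynamic range in stops converts to a contrast ratio of 2 to the power of the number of stops. Here’s a rough back-of-the-envelope comparison in Python (the stop counts are estimates based on the figures above, not measured specs):

    # Each F-stop doubles the amount of light a sensor can distinguish,
    # so contrast ratio = 2 ** stops. Stop counts are rough estimates.
    for name, stops in [("human eye", 24), ("good DSLR", 13), ("typical webcam", 6)]:
        print(f"{name}: ~{stops} stops = {2 ** stops:,}:1 contrast")
    # human eye: ~24 stops = 16,777,216:1 contrast
    # good DSLR: ~13 stops = 8,192:1 contrast
    # typical webcam: ~6 stops = 64:1 contrast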

There are two ways to get around this issue. The first is to use a camera with more dynamic range, but this can cost thousands of dollars. The cheaper method is to sit close to a window, get a ring light close to your face, or use a webcam that includes a ring light. This has the added benefit of making you the brightest thing in the picture, which is good because we are wired to look at the brightest spots. That’s also why I painted the walls of my room a darker color. That said, the eye does wander, so have things on the walls that you want others to look at. 

When it comes to which camera to get, think of each pixel like a bucket for holding light. This is why camera makers talk about the size of the sensor: the larger the sensor, the larger the buckets, and the more light they can hold.  
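
As a rough illustration of how much bigger those buckets can get, here’s a quick comparison of sensor areas (the dimensions are approximate published sizes, and this ignores lens and pixel count differences):

    # Approximate sensor dimensions in mm; area is a rough proxy for
    # how much light the sensor can collect.
    sensors = {
        "full-frame camera": (36.0, 24.0),
        "1-inch compact/webcam": (13.2, 8.8),
        "typical phone or laptop": (6.2, 4.6),
    }
    for name, (w, h) in sensors.items():
        print(f"{name}: {w * h:.0f} mm^2")
    # A full-frame sensor has roughly 30x the area of a phone-sized one.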

Phone cameras, Snap filters, and even Zoom use Ai to improve image quality. This means that the speed of your computer will have an impact on the quality of your image when live.  

Which brings me to my next point.

Engage Live

When you’re in a face to face conversation you can respond immediately, and you can watch a person’s facial expression to see if they are following or if they are confused. If you’ve ever tried to do karaoke in a video conference, you’ll know there can be a big delay between the music and your singing. We can try to reduce this delay on our end by using multiple computers, but ultimately we’re going to have to interact with the audience in a different way when we are live.

When you’re on a video conference your computer is doing a lot of things at the same time: it’s trying to improve your video image, compress the video into a smaller format, and encrypt the video so that others can’t easily see it. The result is that sometimes a delay in performance comes from the computer or device that we are using. Many students using older Chromebooks noticed issues when they tried to do polls during a live class. This is exactly why your slides run slowly or your computer lags when you try to play a video. 

Buying a faster laptop can help, but it’s expensive and can still lag if you are doing anything processor intensive. Fortunately, the gaming community has solved this problem for us with tools such as the Elgato HD60 S capture card. It turns an HDMI input into a web camera on your laptop. This way you can have one laptop for your video game or presentation and another for your live streaming.

You can take this idea even further with dedicated hardware like the Blackmagic ATEM Mini Pro, which acts as a web camera and allows you to switch between four HDMI inputs. The true potential of the ATEM Mini Pro comes when you connect a network cable so that you can send your live video directly to live streaming services such as Restream. I’ve been using this for all of my livestreams because it gives way better performance than my laptop is able to provide. 

In a face to face conversation you’d never talk straight for an hour without giving the person you’re talking to a chance to respond. A good rule of thumb is to engage every 2-3 minutes, just to make sure that you haven’t lost anybody. 

Teachers in a classroom can watch the facial expressions of their students to know if everyone understands. So what can you do when you’re live and people don’t have their webcams on?

We have to take the implicit things like nods and confused facial expressions and make them explicit in the chat. Ask “are you getting it? Is this sinking in? Put yes in the comments. Yes?” The key is to make it easy to respond: start with a yes or a no rather than a more complicated question.

It’s important to remember that there may be a 30 to 60 second delay between when you say something and when you get a response. Think about it: to type in a chat on a mobile device you often need to tap the chat icon, then tap the text bar, then type in your response and send it. So you need to be a lot more patient than when having a face to face conversation.  

I’ve found that it helps if you tell people in advance that you will be asking for their response. Say ‘In a moment I’m going to ask you to say yes in the comments, so you should open up the chat now,’ and then explain the question. This way there’s time to get ready. 

The face to face conversation is filled with props that serve as topics for discussion. These days there is way too much reliance on just the slides as the prompt for discussion. You would never just stare at a prop for an hour; that’s why I often like to use the ATEM Mini Pro to overlay my video as a circle on top of my slides. Bring up the slides only as needed and then return to the face to face conversation. People want to engage with other people, not a deck of slides.

If you’re like me and don’t feel comfortable speaking for 45 minutes without a script, then there are apps like PromptSmart Pro that use voice recognition Ai to automatically advance a script, so that all you need to do is read the script or the main points. 

There are many different opinions around using scripts, and I think it depends on how you use one. If you’re going to read it in a monotone, then it’s better just to have key points. All the best keynote speakers I know have scripts. They’re written in the voice of the speaker, and that allows the speaker to focus on their tonality and expression rather than worrying about presenting the content in the right sequence. As a presenter with ADHD, having a script has been a lifesaver for me because it allows me to remove technical distractions, focus on expression, and think deeply about your comments and questions. It helps me to remember to engage you every few minutes, it reminds me exactly what screen transitions I need to do live, and it allows me to deliver a consistent, quality experience from presentation to presentation. The script also helps me turn my talks into a blogpost or even into a future chapter of the Ai Parenting book that I’m working on.

Most importantly, it allows me to modify my scripts so that my improvement list is always included. 

Improvement List

Those who have been watching my livestreams for some time have probably seen a ton of the mistakes that I’ve made. One time I accidentally muted the audio; last week you may have heard a delay or echo during the livestream; during an interview the video lagged because there wasn’t enough bandwidth. I’ve even lost power in the middle of a livestream, and one time the streaming service would not start even after I reset my computer.

I want you to know that it was in my most embarrassing moments, the times when I questioned if I was really suited for doing live presentations, the times when I was most humbled by technology, that I learned the most. If we are too afraid to fail, then we will never try and we will never learn. 

When our live video is not working, it’s easy to blame technology (“I have an old computer”, “I hate Google Meet”) or to blame our own tech competence (“I’m not a techie”), but that won’t help us improve. The only thing that helps is asking why the video did not work. Was it because another program was using the camera? Does restarting the computer solve the problem?

We need to build a habit of celebrating failures because we are really celebrating learning. After one really bad live stream I decided to finally write down all of the mistakes that I had made into an improvement list. 

My goal was to write down the cause of each mistake and the way that I would resolve the issue in the future. I would reward myself every time I came up with a way to avoid one of those mistakes. For some a reward is chocolate; for me it’s buying tech gadgets. At times the solution required that I buy equipment to help me solve the issue or pay for a subscription to a software tool that would save time.
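
An entry doesn’t need to be elaborate. Something like this is enough (the incident and the fix below are made up for illustration):

    Mistake: Audio was muted for the first two minutes of the stream.
    Cause:   The audio mixer was still muted from the last rehearsal.
    Fix:     Add “unmute mixer and do a sound check” to the pre-stream checklist.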

I’m not perfect by any stretch of the imagination, but I’m always learning and improving. The tech tips that I’m sharing with you today only scratch the surface of what I’ve learned; it would take days or maybe even weeks to share all of the tips and tricks that I’ve picked up over time.  

That’s exactly why I’ve put together my lessons learned into the Be Confident Live masterclass. It’s a course that goes into the details of being confident live, from the equipment that will best serve your goals, to strategies for engaging a live audience, to reusing video replays.  

Every 45 minute to one hour livestream that I do is turned into three 60 second video previews, three 10 minute videos, one podcast, and one blogpost. If you want to know how this is done, reach out to me. I’m planning on creating a circle for creators, and I’d like to know what specific things you’d like me to cover in a course.

https://youtu.be/mWWIIz1lcVM

References

  1. Laptop mic pickup range is 4-6 inches https://www.quora.com/How-can-I-increase-microphone-range-in-laptop
  2. Dynamic Range of Human Eye https://photographylife.com/maximizing-dynamic-range
