At the start of These Unprecedented Times, we hosted two Q&A sessions. Our audience asked a bunch of questions about AWS accounts, and not all of them fit into neat little categories. So, here’s a collection of the misfits.

How does the AWS partner ecosystem work regarding reselling of AWS services? Do they get a discount? Do they add services on top and merge the infrastructure cost with one single bill? Are they doing something else entirely?

From where I stand, resellers make their money on fairly low margins by capturing a percentage of their customers’ AWS bills. In return for that percentage, they’re basically responsible for the day-to-day care and feeding of the customer: ensuring they’re happy, fielding support queries, and so on. That’d keep me up at night (either with worry or support pages), so I basically run screaming from the entire model.

When you are a reseller and have a bunch of different clients that generate a lot of cloud spend collectively, there’s a discount that comes into play. Remember, folks: No one pays retail at scale with any cloud provider. As resellers start making the pile of customers bigger and bigger, the discounts are commensurate, and they can make solid money that way.

Plus, resellers generally aren’t sitting there as uninvolved third parties. They also provide value-added services they can bill for, which vary by the reseller.

But there’s a lot of risk that comes with being a reseller. If a customer goes away, you’re exposed to significant business risk; AWS will still expect you to pay even if your customers aren’t paying you. In my opinion, the trade-off for fairly paltry returns doesn’t seem worth it.

But again, I’m not a reseller, and I don’t work with them. If I’ve gotten this wrong, please give me a shout.

We currently see a shortage of Spot instances. What’s your take on this, and have you seen similar things recently?

We’re seeing an increase in demand across most Availability Zones and most instance types you’d actually want to use. But it’s not dramatic; we’re not seeing demand double or capacity become completely unavailable in most cases. What’s tricky since the reworking of the Spot pricing model a few years back is that we don’t see dynamic price spikes anymore; instead, we see inventory drying up and terminations increasing during shortfall periods.

This incidentally ties into one of the problems I’ve been vaguely concerned about: everyone’s model assumes that Spot pricing will remain a fixed low fee.

What if pricing changed? Would your business still be tenable at typical Reserved Instance prices? Because I’m not willing to bet the company on Spot pricing remaining low forever…
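One way to sanity-check that bet is to pull recent Spot price history and compare it against what you’d pay on-demand or with a Reserved Instance. Here’s a minimal boto3 sketch; the instance type and the on-demand rate are hypothetical placeholders, not a recommendation:

```python
# Sanity-check the "Spot stays cheap forever" assumption by pulling
# recent Spot price history for an instance type you care about.
# The instance type and on-demand rate below are hypothetical; look
# up your own numbers.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],             # hypothetical workload
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
)

prices = [float(p["SpotPrice"]) for p in resp["SpotPriceHistory"]]
on_demand = 0.096                           # hypothetical on-demand $/hr

print(f"min ${min(prices):.4f}, max ${max(prices):.4f} per hour")
print(f"worst-case savings vs on-demand: {1 - max(prices) / on_demand:.0%}")
```

If the worst-case number over a representative window still beats your Reserved Instance math, great; if it doesn’t, you have your answer.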

How long do Spot instances last?

It depends. Eric Hammond talked about running his own development instance on a Spot instance with a fourteen-month lifetime. Other times, they’ll last less than a day.

So, run some experiments and find out. Also assume that, in general, whatever you see is going to be an edge case, and that your app has to respond gracefully to it; there’s a rough sketch of one approach at the end of this answer.

I had a Spot instance of my own run reliably for three months, and then over a two-day span it saw sixty terminations. Clearly there was some capacity issue at play; that instance has been just fine ever since.
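As for responding gracefully: the two-minute interruption notice is what gives your app a fighting chance. Here’s a minimal sketch, assuming IMDSv2 and that it runs on the instance itself; drain_workload() is a hypothetical placeholder for your own shutdown logic:

```python
# Poll the EC2 instance metadata service for a Spot interruption notice.
# Minimal sketch, assuming IMDSv2 and that this runs on the instance itself.
import time
import urllib.error
import urllib.request

METADATA = "http://169.254.169.254/latest"


def imds_token() -> str:
    # IMDSv2: fetch a short-lived session token first.
    req = urllib.request.Request(
        f"{METADATA}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()


def interruption_pending(token: str) -> bool:
    # /spot/instance-action returns 404 until AWS schedules a reclaim,
    # then a small JSON document with the action and time.
    req = urllib.request.Request(
        f"{METADATA}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


def drain_workload() -> None:
    # Hypothetical placeholder: deregister from load balancers,
    # checkpoint state, stop accepting new work, and so on.
    print("Interruption notice received; draining...")


if __name__ == "__main__":
    # Re-fetch the token on each poll so it can't expire during a long wait.
    while not interruption_pending(imds_token()):
        time.sleep(5)
    drain_workload()
```

AWS also publishes these interruption warnings as EventBridge events, which is the better fit if you’d rather react from outside the instance.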

One of the recommendations I’ve seen you make is to move instances into public subnets to remove or reduce NAT Gateway charges. Any recommendations on how to convince clients that this doesn’t reduce security?

Security folks have been trying and failing to convince the internet for years that NAT is not a security measure. Okay, sure, if you say so. Don’t email me.

NAT, by itself, is not an expensive problem. It’s the Managed NAT Gateway that’s the expensive piece.

I would love to see a community-driven, very robust CloudFormation snippet that acts as a click-button-receive-output replacement for the Managed NAT Gateway. Because, let’s face it, that thing is egregiously priced. It carries an hourly charge of 4.5¢ per NAT Gateway in most US regions, which works out to roughly $33 a month before it processes a single byte, plus a 4.5¢ per-gigabyte data processing fee, which is on top of the per-gigabyte data transfer fees.
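To put rough numbers on that, here’s a back-of-the-envelope sketch using those two rates (us-east-1 pricing as of this writing; data transfer is billed separately either way, so it’s left out):

```python
# Back-of-the-envelope Managed NAT Gateway cost, assuming us-east-1
# pricing as of this writing: $0.045 per gateway-hour and $0.045 per GB
# processed. Data transfer charges are billed separately either way.

HOURLY_RATE = 0.045      # $ per NAT Gateway per hour
PROCESSING_RATE = 0.045  # $ per GB pushed through the gateway
HOURS_PER_MONTH = 730


def monthly_nat_gateway_cost(gateways: int, gb_processed: float) -> float:
    hourly = gateways * HOURLY_RATE * HOURS_PER_MONTH
    processing = gb_processed * PROCESSING_RATE
    return hourly + processing


# Three gateways (one per AZ) pushing 10 TB a month:
print(f"${monthly_nat_gateway_cost(3, 10_000):,.2f}")  # ~$548.55
```

Even before data transfer, the processing fee dominates once you push any real traffic through it.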

How do you tell a compelling story for people who own data in S3 to set up lifecycles that ultimately end with “Expiration”?

Honestly, I’ve given up on a lot of this.

If you look at this at a global level, the advent of Glacier Deep Archive means there’s no longer the need to delete things the way there once was. At roughly $1,000 per petabyte per month, it’s easier (and saves everyone time) to convince people to send data to a deep archive instead.

Can you handle a 12-hour retrieval latency if you ever need that data back? Great. That’s an easier sell than convincing people to delete their data.
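For what it’s worth, the lifecycle rule itself is the easy part. Here’s a minimal boto3 sketch that transitions everything to Deep Archive after 90 days and, if you can get buy-in, expires it after five years; the bucket name and the day counts are hypothetical:

```python
# Minimal boto3 sketch of the lifecycle story above: shove everything
# into Glacier Deep Archive after 90 days and, if you can get buy-in,
# expire it after roughly five years. Bucket name and day counts are
# hypothetical; adjust to taste.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-swamp",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "Expiration": {"Days": 1825},  # the hard sell: ~5 years
            }
        ]
    },
)
```

The Expiration block is the part that needs the convincing; everything else is a one-time API call.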

Remember, data science is an entire industry built on convincing people never to delete anything, because you can’t justify the enormous cost of a data scientist when they only have a few terabytes to work with. Oh, no: they need big data. A data swamp.

Let me guess: Your pressing question might not be included in this motley collection. Well, you’re in luck. Drop me a line on Twitter and I’ll do my darndest to get back to you.