September 30, 2010

Photos from Day 2 of the VMware View Road Show

Filed under: Behind the Scenes,Industry — TGallina @ 12:22 am

Today was Day 2 of the VMware View Road Show, hosted by our San Diego region.  We held the event at the beautiful Estancia Hotel in La Jolla.

VMWARE VIEW ROADSHOW sign

Steve and Sarah from Trace3 welcome guests

Ben DuBois: Marketing Manager of Virtualization Solutions for NetApp

September 28, 2010

Today we kick off the VMware Road Show in Newport Beach

Filed under: Behind the Scenes,Industry — TGallina @ 8:48 pm

Dave Hekimian (or, as the announcer called him, the “King of the Cloud”) took the stage again today as Trace3 kicked off the VMware View Technical Deep Dive.  Dave showed how you can “Trace your way back to the business”.

Dave Hekimian from Trace3 presents at the VM Road Show

Today we learned about a huge 50,000-seat deployment project with NetApp and VMware.  Dave discussed moving from 32-bit to 64-bit, how to alleviate “storms,” why the PCoIP protocol does well on a WAN, and the ability to set shares from an I/O perspective.  Over 90 attendees are here, representing the best and brightest in Orange County.

John Dodge from VMware presents at the VM Road Show

Tomorrow we’ll be in San Diego, and Thursday we’ll be in L.A.

September 5, 2010

Trace3’s own Tim Abbott interviewed LIVE at F5 Summit 2010 in Chicago

Filed under: Technical — TGallina @ 5:26 pm

Trace3’s Tim Abbott recently sat down with DevCentral Live at the F5 Summit 2010 in Chicago to discuss Trace3’s accomplishments this year, as well as the success of the ARX product line and his work with LTM and GTM. The following is a transcript of the interview.

Trace3's Tim Abbott interviewed LIVE at F5 Summit 2010 in Chicago

DevCentral: Hi, welcome to the F5 Summit 2010, some DevCentral Live coming to you from Chicago this evening. It’s been a great event, lots of great discussions going on. We’ve got a special guest with us this evening, Tim Abbott from Trace 3. How are you?

Tim Abbott: Doing great.

DC: Glad to have you here. You’re one of our most valued partners. First of all, tell us what you do and about Trace 3, and some of the accolades you’ve earned over the last year with F5.

TA: Absolutely. Trace 3 is a systems integrator based out of California. We have five offices based out of the West. We were nominated this year as F5’s Partner of the Year, and Systems Integrator of the Year, as well as ARX Partner of the Year.

DC: Wow, so you’ve got the full gamut, that’s awesome.

TA: We’ve actually adopted all of the F5 products really well and we’ve been able to run with them very well.

DC: I love talking with people who know all the products, because some people specialize in certain products. A lot of customers have one product, typically LTM.  So tell me, as a partner that spends a lot of time talking to customers, about that whole gamut. What kind of interesting trends are you seeing with users out there? What are customers trying to do? Any interesting things you see happening?

TA: There’s a lot of cool things going on in the industry right now. For us, one of the dynamics we see is that IT has taken a huge hit financially over the past 12-18 months. There’s less staff handling the IT resources they have, and a lot of times you see IT organizations just reacting to problems. They’re quickly reacting, trying to shoot from the hip, and jumping in to fix something that’s hemorrhaging in their environment. What we’ve found is a lot of our customers come to us with a problem, and the cool thing about knowing the F5 product line is we’re able to go in there and say, “I understand that you think the problem might be performance-based, but the problem really is you haven’t gotten to virtualization yet, so let’s talk about virtualization and how it could help you.” We could sell you product X and fix it immediately, but we want to make this a long-term strategic partnership and look at how we can get you to the next level of whatever that solution might be.

DC: So a more holistic approach, because it’s like a band-aid where you can fix the problem now, but in six months you’ll have another one. That tactical investment, particularly in times like these is not the smart one. It might feel like the safe one, but it’s probably not the smartest long term.

TA: A lot of customers are finding these days that we can fix a problem immediately, but that way we’re taking a band-aid approach to their solutions. We’re not looking at the bigger picture; we’re not looking at where we want to get you two to three years from today. If they are approaching virtualization, there are a lot of moving pieces to that type of solution. Usually when we approach our customers, we look at the whole thing as a multi-year project and say, “Here’s where we want to get you five years from today. How do we fit the different solutions into that five-year time period and meet your solution needs?”

DC: Are there any examples of how customers have embraced this model? To try to deal with the immediate issues, but also set them on a trajectory of more success long term?

TA: We’ve had great success with the ARX product line. It’s one of those tools where the customer says here are my top five priorities, and you could sell them a product for each of those. But a better way to approach it is to say, let’s talk about ARX. How can we tier your data across the platform better? How can we leverage the technology that you’re already using? And when we look at ARX, you look at how people are spending their budgets. We have many accounts where a majority of their IT budget goes to storage for unstructured data.

DC: Storage is cheaper, but the growth is happening faster than the price is declining.

TA: Absolutely, if you build it they’ll just fill it up.

DC: ARX is a really fascinating solution, what it does how it works. It seems like such a logical solution to provide some efficiency but also gives the customer some freedom of choice. For people who might have heard of ARX but they’re not really sure how it works or if they’re a candidate for ARX, what would you be saying is a good indicator?

TA: The analogy I like to use with ARX is that ARX is like that junk drawer you have in the kitchen, the spot where you just throw everything. The cool thing about ARX is when you approach people and you say you have all this storage, you have all these requirements for unstructured data, and you want to help them clean it up. Everybody wants to clean up that junk drawer, but they’re so scared of it, because they go in there and find 20 keys, and they have no idea what the keys belong to, they just know each one is important. We talk to customers about their data, and we ask them deep-dive questions, where is the most important data? How long do you want to keep that data? What data’s not important?  Because some of it might just be wasting space. We find a lot of accounts, where they’re spending a lot of money on back-up, because they’re not getting rid of the stuff they don’t need, and prioritizing the stuff they do need. When they clean it up, they now know I’m using my storage more effectively. When I have to buy more storage, I’m not just buying it and using it as a black hole. I’m actually buying it for a purpose that’s actually helping me. It’s a win-win.

DC: You also do a lot of work with LTM, and GTM. We find when customers use those together, they get some kind of manifold benefits. Any interesting stories about customers you see out there using virtualization, LTM and GTM? What kind of things do you see people doing with them?

TA: I think virtualization has changed the dynamics of IT. We’re finding people are starting to take the initiative and actually put a DR plan in place, and get multiple sites. They’re taking advantage of more cost-effective internet bandwidth. People now are a lot more flexible in their IT infrastructure. A couple of years ago you were tied to one data center, and if you moved it you were moving five hundred-plus servers, and there was no way to do that without huge outages. What we’re seeing is people will turn up data centers a lot faster, and now you’re talking about multiple data centers and asking, how do I balance across those data centers? You look at people with iPads and iPhones, all these web-enabled devices. If a website isn’t fast, up, and reliable, they’re on to the next one. We see people deploying virtualization, and their data centers are popping up across the US. When they do that, they need an LTM device, a GTM device, and web acceleration, and they need it in a simple form factor, and that’s where F5 really comes into play. We can get virtualization out there, we can handle server requirements, but how do we handle the other aspects of that website?

DC: And that’s important, because now you see that if your site doesn’t load up quickly enough, people go to the next one. There’s a short attention span. There’s a four-second rule that is paramount if people want to use app or web products; if they don’t get that kind of performance or faster, they’re gone in four seconds.

TA: Yeah and it’s just going to get more and more like that. It might be four seconds today but in two years it could be one second. If it doesn’t come up instantly, you’re gone.

DC: Absolutely, using all of those products together gets you combined benefits. This has been awesome, I appreciate you taking some time to sit down today. Congratulations on an awesome year.

TA: Absolutely, thanks for talking.

If you’re lazy and just want to watch the video, here’s the link:

http://www.ustream.tv/recorded/8699182

September 2, 2010

Moving to Exchange Server 2010 Service Pack 1

Filed under: Technical — admin @ 12:07 am

Microsoft recently announced that Service Pack 1 (SP1) for Exchange Server 2010 had been released to web, prompting an immediate upgrade rush for all of us Exchange professionals. Most of us maintain at least one home/personal lab environment, the better to pre-break things before setting foot on a customer site. Before you go charging out to do this for production (especially if you’re one of my customers, or don’t want to run the risk of suddenly becoming one of my customers), take a few minutes to learn about some of the current issues with SP1.

Easy Installation and Upgrade Slipstreaming

One thing that I love about Exchange service packs is that from Exchange 2007 on, they’re full installations in their own right. Ready to deploy a brand new Exchange 2010 SP1 server? Just run setup from the SP1 binaries – no more fiddling around with the original binaries, then applying your service packs. Of course, the Update Rollups now take the place of that, but there’s a mechanism to slipstream them into the installer (and here is the Exchange 2007 version of this article).

Note: If you do make use of the slipstream capabilities, remember that Update Rollups are both version-dependent (tied to the particular RTM/SP release level) and are cumulative. SP1 UR4 is not the same thing as RTM UR4! However, RTM UR4 will include RTM UR3, RTM UR2, and RTM UR1…just as SP1 UR4 will contain SP1 UR3, SP1 UR2, and SP1 UR1.

The articles I linked to say not to slipstream the Update Rollups with a service pack, and I’ve heard some confusion about what this means. It’s simple: you can use the Updates folder mechanism to slipstream the Update Rollups when you are performing a clean install. You cannot use the slipstream mechanism when you are applying a service pack to an existing Exchange installation. In the latter situation, apply the service pack, then the latest Update Rollup.

It’s too early for any Update Rollups for Exchange 2010 SP1 to exist at the time of writing, but if there were (for the sake of illustration, let’s say that SP1 UR X just came out), consider these two scenarios:

  • You have an existing Exchange 2010 RTM UR4 environment and want to upgrade directly to SP1 UR X. You would do this in two steps on each machine: run the SP1 installer, then run the latest SP1 UR X installer.
  • You now want to add a new Exchange 2010 server into your environment and want it to be at the same patch level. You could perform the installation in a single step from the SP1 binaries by making sure the latest SP1 UR X installer was in the Updates folder.

If these scenarios seem overly complicated, just remember back to the Exchange 2003 days…and before.
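The version-matching rule above (Update Rollups are tied to a particular RTM/SP level) can be sketched as a quick sanity check before dropping a rollup into the Updates folder. The filename convention here is purely hypothetical, for illustration — real rollup packages have their own naming scheme:

```python
import re

# Hypothetical rollup filename convention, for illustration only.
ROLLUP_PATTERN = re.compile(r"Exchange2010-(RTM|SP\d)-UR(\d+)", re.IGNORECASE)

def rollup_matches_installer(rollup_filename: str, installer_level: str) -> bool:
    """Return True if a rollup file is tied to the same RTM/SP level as the
    installer whose Updates folder it would be slipstreamed into."""
    match = ROLLUP_PATTERN.search(rollup_filename)
    if not match:
        return False  # unrecognized name: don't slipstream it
    return match.group(1).upper() == installer_level.upper()

# SP1 UR2 belongs with the SP1 installer, not the RTM one.
print(rollup_matches_installer("Exchange2010-SP1-UR2-x64.msp", "SP1"))   # True
print(rollup_matches_installer("Exchange2010-RTM-UR4-x64.msp", "SP1"))  # False
```

Remember that the check only makes sense for clean installs — when applying a service pack over an existing installation, the Updates folder mechanism doesn’t apply at all.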

Third Party Applications

This might surprise you, but in all of the current Exchange 2010 projects I’m working on, I’ve not even raised the question of upgrading to SP1 yet. Why would I not do that? Simple – all of these environments have dependencies on third-party software that is not yet certified for Exchange 2010 SP1. In some cases, the software has barely just been certified for Exchange 2010 RTM! If the customer brings it up, I always encourage them to start examining SP1 in the lab, but for most production environments, supportability is a key requirement.

Make sure you’re not going to break any applications you care about before you go applying service packs! Exchange service packs always make changes – some easy to see, some harder to spot. You may need to upgrade your third-party applications, or you may simply need to make configuration changes ahead of time – but if you blindly apply service packs, you’ll find these things out the hard way. If you have a critical issue or lack of functionality that the Exchange 2010 SP1 will address, get it tested in your lab and make sure things will work.

Key applications I encourage my customers to test include:

Applications that use SMTP submission are typically pretty safe, and there are other applications that you might be okay living without if something does break. Figure out what you can live with, test them (or wait for certifications), and go from there.

Complications and Gotchas

Unfortunately, not every service pack goes smoothly. For Exchange 2010 SP1, one of the big gotchas that early adopters are giving strong feedback about is the number of hotfixes you must download and apply to Windows and the .NET Framework before applying SP1 (a variable number, depending on which base OS your Exchange 2010 server is running).

Having to install hotfixes wouldn’t be that bad if the installer told you, “Hey, click here and here and here to download and install the missing hotfixes.” Exchange has historically not done that (citing boundaries between Microsoft product groups) even though other Microsoft applications don’t seem to be quite as hobbled. However, this instance of (lack of) integration is particularly egregious because of two factors.

Factor #1: hotfix naming conventions. Back in the days of Windows 2000, you knew whether a hotfix was meant for your system, because whether you were running Workstation or Server, it was Windows 2000. Windows XP and Windows 2003 broke that naming link between desktop and server operating systems, often confusingly so once 64-bit versions of each were introduced (32-bit XP and 32-bit 2003 had their own patch versions, but 64-bit XP applied 64-bit 2003 hotfixes).

Then we got a few more twists to deal with. For example, did you know that Windows Vista and Windows Server 2008 are the same codebase under the hood? Or that Windows 7 and Windows Server 2008 R2, likewise, are BFFs? It’s true. Meanwhile, the logic behind the naming of Windows Server 2003 R2 and Windows Server 2008 R2 was very different: Windows Server 2003 R2 was basically Windows Server 2003 with a service pack and a few additional components, while Windows Server 2008 R2 has substantially different code under the hood than Windows Server 2008 with a service pack. (I would guess that Windows Server 2008 R2 got the R2 moniker to capitalize on Windows 2008’s success, while Windows 7 got a new name to differentiate itself from the perceived train wreck that Vista had become, but that’s speculation on my part.)

At any rate, figuring out which hotfixes you need – and which versions of those hotfixes – is less than easy. Just remember that you’re always downloading the 64-bit patch, and that Windows 2008=Vista while Windows 2008 R2=Windows 7 and you should be fine.
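That rule of thumb — always the 64-bit patch, and look for the client-OS twin of your server OS in hotfix titles — is small enough to capture in a lookup table. The helper below is just an illustration of the mapping described above, not a real tool:

```python
# Client-OS name that shares a codebase with each server OS running
# Exchange 2010, per the "2008 = Vista, 2008 R2 = Windows 7" rule above.
HOTFIX_OS_ALIASES = {
    "Windows Server 2008": "Windows Vista",
    "Windows Server 2008 R2": "Windows 7",
}

def hotfix_search_terms(server_os: str) -> list:
    """Exchange 2010 is 64-bit only, so always look for the x64 package,
    under either the server OS name or its client-OS twin."""
    terms = ["%s x64" % server_os]
    alias = HOTFIX_OS_ALIASES.get(server_os)
    if alias:
        terms.append("%s x64" % alias)
    return terms

print(hotfix_search_terms("Windows Server 2008 R2"))
# ['Windows Server 2008 R2 x64', 'Windows 7 x64']
```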

Factor #2: hotfix release channels. None of these hotfixes show up under Windows Update. There’s no easy installer or tool to run that gets them for you. In fact, at least two of the hotfixes must be obtained directly from Microsoft Customer Support Services. All of these hotfixes include scary legal boilerplate about not being fully regression tested and therefore not supported unless you were directly told to install them by CSS. This has caused quite a bit of angst out in the Exchange community, enough so that various people are collecting the hotfixes and making them available off their own websites in one easy-to-download package[1].

I know that these people mean well and are trying to save others from a frustrating experience, but in this case, the help offered is a bad idea. That same hotfix boilerplate means that everyone who downloads those hotfixes agrees not to redistribute them. There’s no exception for good intentions. If you think this is bogus, let me give you two things to think about:

  • You need to be able to verify that your hotfixes are legitimate and haven’t been tampered with. Do you really want to trust production mission-critical systems to hotfixes you scrounged from some random Exchange pro you only know through blog postings? Even if the pro is trustworthy, is their web site? Quite frankly, I trust Microsoft’s web security team to prevent, detect, and mitigate hotfix-affecting intrusions far more quickly and efficiently than some random Exchange professional’s web host. I’m not disparaging any of my colleagues out there, but let’s face it – we have a lot more things to stay focused on. Few of us (if any) have the time and resources the Microsoft security guys do.
  • Hotfixes in bundles grow stale. When you link to a KB article or Microsoft Download offering to get a hotfix, you’re getting the most recent version of that hotfix. Yes, hotfixes may be updated behind the scenes as issues are uncovered and testing results come in. In the case of the direct-from-CSS hotfixes, you can get them for free through a relatively simple process. As part of that process, Microsoft collects your contact info so they can alert you if any issues later come up with the hotfix that may affect you. Downloading a stale hotfix from a random bundle increases the chances of getting an old hotfix version that may cause issues in your environment, costing you a support incident. How many of these people are going to update their bundles as new hotfix versions become available? How quickly will they do it – and how will you know?

The Exchange product team has gotten an overwhelming amount of feedback on this issue, and they’ve responded on their blog. Not only do they give you a handy table rounding up links to get the hotfixes, they also collect a number of other potential gotchas and advice to learn from before beginning your SP1 deployment. Go check it out, then start deploying SP1 in your lab.

Good luck, and have fun! SP1 includes some killer new functionality, so take a look and enjoy!

[1] If you’re about to deploy a number of servers in a short period of time, of course you should cache these downloaded hotfixes for your team’s own use. Just make sure that you check back occasionally for updated versions of the hotfixes. The rule of thumb I’d use is about a week – if I’m hitting my own hotfix cache and it’s older than a week, it’s worth a couple of minutes to make sure it’s still current.
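That week-old rule of thumb is easy to automate. A minimal sketch, using only file modification times (the path and threshold are whatever your team uses):

```python
import os
import time

MAX_CACHE_AGE_DAYS = 7  # the "about a week" rule of thumb from the footnote

def cache_is_stale(path, max_age_days=MAX_CACHE_AGE_DAYS):
    """Return True if a cached hotfix file is old enough that it's worth
    re-checking the official download page for a newer version."""
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds > max_age_days * 86400
```

Run it against each file in your hotfix cache before a deployment wave; anything flagged stale gets a quick re-download from the official source rather than blind reuse.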

© 2012 Trace3 All Rights Reserved