Wednesday, 23 December 2015

Thoughts on Final Cut Pro X



The backstory.
The core actions of non-linear editing haven’t changed much in 15 years.
Insert. Trim. Lift. Overwrite.
We’ve enjoyed incremental software improvements year to year, but rarely a giant leap forward.  If anything, the trend was steadily towards ever more features at lower prices.  Avid, Final Cut Pro and Adobe Premiere had far more interface similarities than differences, and they were eventually all priced within reach of anyone serious about editing for a living.
As recently as the spring of 2010, I was asked to edit a primetime television show on an Avid Meridian editing system from the 90’s.  It was more of an ‘edit box’ than a full-service modern computer, but it worked perfectly.  As television editors, we just make choices about picture, music and story.  When the choices are complete, our assistant editors make EDLs for online and coloring and export OMFs for the audio mix.  We suffer no consequences for using ten-year-old technology.
But editing the ‘television way’, with teams of assistants and days lounging at the audio mix with a steady stream of fresh-baked cookies, is a niche workflow and, in my opinion, an increasingly rare way of creating great content.  Today, most editors work independently and need to perform their own ingest and media management as well as titling, motion graphics and even coloring.
Since you are reading this on Philip Bloom’s blog, you are likely one of these media ‘masters of all trades’: shooter, editor, mixer, colorist.  You also likely know that last summer there was an earthquake in the non-linear editing world.  What many had expected to be a long-overdue update to Final Cut Pro ended up being a full-scale application rewrite with a completely new interface – and many missing features.  The feedback from editors was overwhelmingly negative.  We editors aren’t exactly known for our sunny dispositions to start with, but with FCP X, Apple unleashed a hornet’s nest of professional users who understandably felt confused and betrayed.
Working in film and television for 20 years, I’d personally seen editors unhappy with change many times before:
In 1995, I knew a flat-bed film editor who was resistant to non-linear editing and said ‘if I wanted to work on a computer, I’d be an accountant!’  In the years that followed, he worked less and less.
When Final Cut Pro came along, I trained many seasoned Avid editors how to use the new software.  More than a few said it was ‘not suitable for professional use.’  But in 2001, my first primetime television editing job was on Final Cut and, as recently as 2011, I edited Project Runway on FCP.
When FCP X came along, the grumpy and resistant editor was… me.  While the new approach and interface intrigued me, I felt totally disoriented and was unable to perform even the most basic editing tasks.  Mind you, I beta-tested version 1.0 of Final Cut in 1999 and loved it.  Since then, I have spent more than 20,000 hours editing, mostly on Avid, but with healthy doses of FCP and Premiere as well.
In the 90’s, I was an aspiring editor, but now I am a grizzled veteran. So, with version X, I said “why should I bother to learn yet another interface? Where’s my FCP 8? What can X do that I can’t do now? What’s in it for me?”
However, having witnessed the waves of resistance to change in the past, and thinking ‘I may be missing the next big thing’, I forced myself to work through the discomfort and learn Final Cut Pro X.  Now I want to share my impressions so far.
Trouble in the timeline.
We have seen an escalating arms race of buttons and widgets and dials at the head of our timelines.  In an effort to let us mute and solo and patch our tracks and view waveforms, all at a single click, Avid went off the deep end in version 5.5.  Check out the interface widgets at the head of the Avid timeline in this review.  On a laptop screen, these controls can obscure a significant portion of timeline real estate.  Here’s Avid 6 compared to FCP X:

FCP X has gone to the opposite extreme.  No widgets, no tracks, and the timeline just emerges from the dark, left edge of the interface.  This is a monumental change.  Those widgets are there for a functional reason, so removing them means weaving that functionality throughout the timeline interface.  In my experience, Apple has pulled this part off.  Patching tracks no longer applies, and the audio adjustments in the timeline are, if anything, an improvement.
But those widgets are also there to identify tracks.  Which leads to the fact that… there are no ‘tracks’ in FCP X!  In my opinion, this is the single biggest shift in non-linear editing since its inception.
At first, editing without conventional timeline tracks felt unnerving.  I did not like it.  It was like driving on a busy highway with no lane markers.  As my first FCP X deadline approached, it was as if that unmarked highway was now rain-soaked and lit only by moonlight.  It induced waves of panic.  I typically organize my audio tracks by content: natural sound on tracks 1 and 2, interviews on 3 and 4, music and FX on 5 through 8.  These standard track assignments make it easier to collaborate with other editors.
In FCP X, that’s gone.  No track numbers.  Just metadata.  There are automated solutions for identifying music and dialogue, but the tracks are gone.  Instead, there are storylines and clips that tack on to storylines.   It’s a departure from every major editing platform currently available.  It’s taken months to feel familiar, but the panic is now gone.
It’s very hard to break sync with the default settings in FCP X.  But, as much as Apple is trying to prevent us from losing sync, my editing habits are formed around breaking sync constantly,  and then repairing it.  That’s how I edit.  Broken sync indicators often help me track my edits in progress.

After months of editing with FCP X, I am still aware of the ‘missing tracks’ but now, instead of driving a car (bear with me, I am jumping metaphors), it feels more like skiing or surfing.  There’s a freedom, a flexibility.  Now, ‘tracks’ seem out of place.
If you think this sounds like flowery hyperbole, you may be right.  It’s hard to think of ways to convey feelings about software.  So, I am merely trying to express the small sense of elation that I felt when I realized the upside of losing my trusted track framework.  I felt encouraged to experiment more.  I felt slightly liberated in my timeline edits.   For someone who grinds away in editing interfaces day after day, year after year, for more than a decade,  this was a notable change.  There is something new here; a different way to edit.
Where’s my source window?

Another source of uneasy tension was the removal of the traditional ‘source’ window from the interface.  Three-point editing has been synonymous with the professional non-linear interface since day one.  With FCP X, it’s still there, just not as obvious.  As it turns out, the default clip view is my least favorite.  When I switched to List View for Event browsing, I found that I actually liked it more than any source window I have used.  I can easily skim the footage and scan the waveforms for audio indicators.  This is a specific feature I miss when I edit on Avid daily.  A huge improvement, in my opinion.
The unfortunate Event.
You may have seen me refer to ‘Event browsing’.  This is the new alternative to bins, folders and source clip viewing.  It’s also tied to how Final Cut X organizes and interacts with media on your hard drive.  By default, FCP X wants you to ‘wrap’ all of your project’s media in Events – basically the iMovie and iPhoto way of managing media.  The objective seems to be to ingest media into the program and eliminate the obvious file structure that relates to the original clip.  If that’s confusing, let’s call it baby-proofing your media.  I don’t like it.  One thing I preferred about previous versions of FCP was that I could import a clip and it would have a direct relationship to a file on my computer that I could break and relink at will.  Avid generally wants to organize your media and have you interface with it through a ‘Media Tool’.  FCP X takes this to a whole new level.  Until this week’s 10.0.3 patch, there wasn’t a way to relink media that had lost its way.

Whatever the intention, this is the feature I dislike the most about the new Final Cut.  I’d like to put files on any drive and move them at will – then, relink them.  This ‘baby-proofing’ of media  seems so out of place in a professional program.   I understand the upside of keeping files in a project linked and organized,  but for me the tradeoff in flexibility is not at all worth it.
Plus, even with the new relinking option in the  10.0.3 update, you get the dreaded yellow exclamation point with no direct way to fix it.   For me, something seems off in an Apple interface when you can’t right-click an alert and have an option to take action.  To relink the file above, I have to go to the File menu and choose ‘Relink Event Files.’  When I found that, it felt like Apple saying “Ok! We will let you relink clips, but  we don’t like it… and we’re still calling them Events!”  I can’t click on that offline file to fix it.  I don’t get it.

The smallest Aha! moment.

I’ve had a roller coaster of emotions with FCP X.  Yes, when you are editing, creating, screening, and outputting, it’s an emotional experience.  There were so many times I was cursing this software.  But there were many small moments where I was loving FCP X as well.  One small Aha! moment happened recently, when I was using the ‘precision editing tool’ – aka ‘trim mode’.  I was tweaking the A and B sides of an edit.  At the same time, it was impacting the titles and graphics on the layers above.  In FCP X, you can access all of the timeline clips while in ‘trim mode’.  In the screen grab above, the red indicator on the top left is me adjusting that clip while in trim mode.  I know this is obscure, but it surprised me.  When it happened, I actually launched my Avid software as well as FCP 6 to make sure I wasn’t crazy.  Sure enough, when I tried to adjust a title while in trim mode, both platforms exited that mode.
I’ll be the first to admit that this is the smallest, most insignificant feature, but it was one of the moments that made me stop and look at this new software.  In fourteen years of editing, this was something I had never done.  Something that felt off-limits and too demanding for the software.  I wasn’t in trim mode.  There was no mode.  Modes are part of my workflow.  Every day I work in edit mode, trim mode, overwrite mode (Avid) and effect mode.  If the barrier between modes is gone, and I am free to adjust effects while I am trimming, then perhaps it’s time for me to rethink how I am editing.  And that – rethinking how I edit – is the Aha! moment.  Suddenly, this new, dynamic timeline made my Avid timeline tracks feel like layers of sediment that buried my clips like fossils (sorry for yet another metaphor).  When you get over the frustration, there is something fast, fluid and flexible in FCP X that I haven’t experienced before.
Again, I know this is flowery stuff.  Not a compelling argument if you own a post production house with 40 editing stations sharing many terabytes of storage.  But for a veteran editor who has spent tens of thousands of hours editing, it was sort of exciting.  Without tracks, without rigid modes, something more freestyle, intuitive and instinctual emerges.  And I liked that.
With any tool you grow to like – your favorite camera, or trusted software – it comes down to little moments.  Small interactions that add up to create a feeling of comfort and familiarity.  You can compare features online, but your feeling about any software will emerge from a thousand small interactions.  Like modifying a title while in trim mode.
I encourage you to download the demo of FCP X.  Try it.  Get frustrated.  Work through the frustration.  Get angry.  Be surprised.  Keep going.  In the end, it may not be a good solution for you.  But, if you spend some time with it, you are bound to find some small, new way to approach editing.  And you may find it exciting, like I did.
When editing first became non-linear, it afforded the luxury of experimentation without consequences.  New styles of editing emerged.  For me, this version of FCP X feels like a peek into the next wave of editing possibilities.  Gone are tracks and timecode. Now, there’s metadata and storylines.  You can mix frame rates and formats without dire consequences.  This feels like the beginning of a leap forward.
But don’t expect to figure it out intuitively.  It’s a different paradigm.  I don’t read manuals but thankfully, there are great training resources available at Lynda.com or from Ripple or Larry Jordan.  I would not have been able to progress with FCP X without those resources.
This is very deep software.  It’s really hard to cover much in a guest blog post.  In short (I suppose it’s too late for that), I will say there are many things to love, and many things to revile, in the new Final Cut.  What makes it worth trying (and working through your frustration), in my opinion, is the prospect of finding a new approach to the way you cut.
The way forward.
You can’t reasonably make a choice about a professional software platform without considering the path forward.  Whether you are a designer, audio engineer, or a film editor, you need to have faith that you are on a viable platform for the future.  You’re bound to invest money in plug-ins, hardware and countless hours of workflow knowledge.
So is FCP X the ‘next big thing’? Are you ‘missing out’ if you don’t get on board?  That’s where things get very murky. Whereas the editing software platforms were on parallel courses before, the introduction of FCP X seems to establish a fork in the road.  Let’s look at it from the perspective of three types of editors:
The independent creator: Master of all trades.
Adobe’s Production Bundle just got a lot more appealing for editors who used a wide range of the former FCP Studio.  Round trips to the very powerful After Effects and Media Encoder can take the place of Motion and Compressor.  If I were starting today, building my first editing station, I would likely opt for Adobe’s offerings.  (Interestingly, my first editing platform was Premiere 5.1.)  Also, Adobe software is cross-platform – meaning you can build a lower-cost PC tower with all of the hard drives and tricked-out video cards you want.  Apple has not exactly inspired confidence lately with its Mac Pro line, and if you bought a new tower today, you’d be getting a pricey machine that was first released in 2010.  However, if you only dabbled in Motion, Color and Soundtrack Pro, there may be enough in FCP X to meet your needs, and you can run it on an iMac or MacBook Pro, like I do.  And, get this, the Apple option of FCP, Motion and Compressor is the much less expensive one.  The full version of Premiere, on its own, is $800.  If you want the bundle including After Effects and Media Encoder, double that.  Final Cut X, with Motion and Compressor, is $400. At this point, I know I won’t use the features in the Adobe bundle that make it four times the price, but you might.
Most troubling for an independent or aspiring editor is Apple’s commitment to the professional platform.  When I started editing, this ‘Pro’ page from Apple had amazing stories about people making cool things with Apple pro software.  The page has not been updated since 2009.  That’s not good.  If you are committing thousands of dollars to hardware and software, and thousands of hours of your life to a platform, you don’t want it to be neglected or abandoned.  I’ve never owned a PC in my life, and the lack of a Pro roadmap from Apple is troubling for me.  Keeping the option to switch computing platforms open is valuable at this point, and by moving forward with FCP X, I am definitely limiting my options.
The professional television and film editor.
With FCP X, it would appear that we are headed for an Avid monopoly of Hollywood editing again.  FCP had made some inroads into production companies and commercial trailer houses, but it was still an uphill battle.  I know several editors who refused to work on certain shows because they didn’t want to work with Final Cut.  This new version of FCP X is likely to make their heads explode.  It’s just too different.  We also get back to the argument of ‘what’s in it for me?’.  For studios and production houses, there is little reason to resist the gravitational pull from Avid.  In fact, the most notable show I cut on FCP, Project Runway, is now headed back to an Avid workflow as a result of their disappointment with FCP X.  For production houses, the Avid user base is there, and it’s the safe choice.
For me…
When I show up to work tomorrow, I will be sitting in front of an Avid.  But for my independent projects, documentaries, DSLR shooting – basically, for everything I am working on in the future – I am sticking with FCP X… for now.  It’s fast, flexible and worth the time I am investing.  It has all the features I used in Color.  I can do light mixing and FX and export using Compressor. Also, outside of television, I don’t intend to ever shoot or deliver on tape again.  It suits my specific needs, just as an ‘edit box’ from the 90’s still works for some TV shows.
I do reserve the right to complain about FCP X, or give up on it in the future.   For now, ‘what’s in it for me’ is a renewed excitement in the fundamentals of editing and an opportunity to rethink the way I cut.

Tuesday, 22 December 2015

Microsoft Redstone: Code Name for a Windows 10 Update, or Windows 11?

We have all heard the announcement from Microsoft officials regarding the end of new Windows versions after Windows 10. Officially, Windows 10 will be the last Windows from Microsoft, but the company will keep updating it with more features for its users.
With this new approach in mind, Microsoft plans to release an update (codename: Threshold) this summer. Beyond that, it also has bigger plans for another update, codenamed Redstone, in 2016.
After Threshold ships, the company will move straight on to the next update. Its plans do not end with the Threshold launch: as with its other product launches, Microsoft has mapped out a sequence of updates for the next few years.
Next year’s update is already on the calendar. The news was revealed by Neowin, which reported that the company is planning a substantial update for Windows 10, though how much it will change the OS is not yet clear. Judging by the name chosen for it, however, it looks set to have a major impact on Windows 10 and to be a major update from Microsoft. Many people are wondering about that name, Redstone. What does it mean? Redstone is a very popular item in Minecraft, used in the game to build new devices and technology. In September 2014, Microsoft acquired Minecraft maker Mojang for $2.5 billion.
Several sources have described Microsoft’s plans for this year’s update. Threshold is said to be a minor update, intended to warm users up for the larger updates to come, and to help them get accustomed to the update model Microsoft has recently adopted without complications or confusion.
Since Microsoft is launching this year’s update in the summer, next year’s update is expected during the summer as well. Microsoft is expected to settle on a June-to-October window and keep that schedule for future Windows 10 updates.
Rumors have begun circulating that Redstone will in fact be Windows 11. In reality, Redstone does not appear to be Windows 11: it is coming too quickly after Windows 10 to be a full new version. It is simply a larger-than-usual update, adding functionality and support for new classes of devices.

Monday, 14 December 2015

How to Achieve 20Gb and 30Gb Bandwidth through Network Bonding


The cost of 10GbE networking has dropped dramatically in the last two years. Plus, getting 10GbE to work has become simpler as hardware and drivers have matured – to the point that anyone who has set up a 1GbE network can set up a 10GbE network.

And if you are used to 1GbE, you'll love the extra bandwidth in 10GbE. I figure it's sort of like buying one of those new Mustangs ... with the 5.0 liter engine, not that I own one :).

But, if you're like me, faster is never fast enough. The good news for us speed junkies is that it is easy to get lots more from your new 10GbE network. All you have to do is install more than one port, and bind them together.

I've been doing this at 45 Drives in order to test our Storinator storage pods (since they can easily read and write at beyond 2 gigabytes per second, thus easily saturating a single or even a double 10GbE connection). My post today will share my work in network bonding, and I'll show how I create 20GbE and 30GbE connections.

OUR SETUP IN THE 45 DRIVES R&D LAB

At the heart of our 10GbE network, we have a Netgear XS708E Unmanaged Switch. It is an 8-port switch that we purchased for $819 USD, which works out to roughly $100/port. That’s quite cheap compared to 10GbE switches from other typical vendors such as Cisco, Arista, and Dell, which can range anywhere from $400/port to $1,000/port. We've found it to be flawless for our lab work, and capable of transferring data at its rated capacity.

Each Storinator pod we used in our network was equipped with an Intel X540-T2 10GbE network adapter, which costs $500 USD. However, there are other, less expensive cards out there, like the Supermicro AOC-STG-i2T, that we have used and found adequate.

Finally, to string all the hardware together, we used Cat 6 network cables. We've found no performance issues for the short runs in our lab, but we'd suggest Cat 6a or Cat 7 for working installations.

Setting up the network was easy, as it is the same as a 1 Gigabit network: just plug each machine you wish to be on the network into the switch using your preferred network cables. Any OS that we offer will automatically pick up the 10GbE NIC card and display the connected interface.
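Once everything is plugged in, it's worth confirming that each link actually negotiated at 10 Gigabit. On Linux, a quick check looks something like this (the interface name below is just a placeholder for whatever your NIC shows up as):

    # Show the negotiated link speed for the 10GbE interface (name will vary per system)
    ethtool enp1s0f0 | grep -i speed
    # A healthy 10GbE link reports:
    # Speed: 10000Mb/s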

For simplicity’s sake, I plugged our DHCP server into the 10GbE switch so that the switch will automatically assign IPs to all connected interfaces. See the diagram below illustrating my setup.


Diagram of 45 Drives Network Set Up

Now that my network is all strung together, and my server/clients can talk to each other through the 10GbE pipe, let’s move on to the fun part…testing the bandwidth of the new network!

TESTING THE BANDWIDTH

The way network bandwidth was first explained to me was that it was like a highway. The more lanes you have on a highway, the more traffic is able to travel at a high rate of speed. It works the same way with clients on a network transferring files to, and from, a storage server. The more bandwidth (lanes) you have, the more users you can have transferring files more quickly. 

To test the bandwidth in all of my experiments, I used iperf, a well-known free network benchmark utility. Iperf quantifies the network bandwidth so we can verify that we have a 10 Gigabit connection between machines on our network. I like it because it works seamlessly in every OS.


Iperf output of Storinator Client #3 connected to Host Storinator Storage Server on a 10 Gigabit Network.
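For reference, the test itself boils down to two commands – iperf listening on the storage server and an iperf client pointed at it from each Storinator client (the IP address and test duration below are placeholders):

    # On the host Storinator storage server:
    iperf -s

    # On a client machine, run a 30-second test against the server's address:
    iperf -c 192.168.1.10 -t 30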

EXPANDING ON 10 GIGABIT

A 10GbE network is fast, but our Storinator storage pods are faster. We have customers who regularly see read and write speeds beyond 2 GBytes per second, and we achieve the same results in the lab. To move all of this data in and out of a pod requires connectivity that is two or three times what a single 10GbE connection offers.

So I wanted to push the limits of our 10GbE by experimenting with network bonding. There are other terms to describe this process, such as NIC teaming or link aggregation. To achieve network bonding, you set up multiple 10GbE connections from your machine to your switch, and tie them together. It can accomplish different results, including a bandwidth that is the sum of your connections. 

Linux makes network bonding easy by offering a built-in module in the kernel, and the behavior of the bonded interface depends on which mode you choose. There are seven modes built in to the bonding module, each with its own unique behavior. Generally speaking, modes provide fault tolerance, load balancing, or a combination of the two. Each mode has its drawbacks, so it is important to select the mode that best suits your application.

What sparked my interest was the fact that, theoretically, using the round-robin mode policy (mode 0 in Linux), we could double the network bandwidth between our machines with no additional hardware, just extra network cables.

HOW TO NETWORK BOND

Since the network bonding driver is pre-built into the Linux kernel, there is no need to install any extra packages. Conveniently, the switch does not need to be configured in any way for the round-robin policy.

In order to bond the network ports in CentOS, the first thing needed is to create a master bond config file in the network-scripts directory (e.g. /etc/sysconfig/network-scripts/ifcfg-bond0). This config file contains information like device name, IP address, subnet, bonding mode, etc. Most of this is entirely up to the user. It is important to note that I used mode 0 round-robin policy. Please see our technical wiki on implementing network bonding in CentOS for detailed information.
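As a rough sketch of what that master config might contain (the addressing and option values here are illustrative only), an /etc/sysconfig/network-scripts/ifcfg-bond0 for round-robin bonding could look like this:

    # ifcfg-bond0 – master bonding interface (example values only)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    PREFIX=24
    # mode=0 is balance-rr (round-robin); miimon is the link-monitoring interval in ms
    BONDING_OPTS="mode=0 miimon=100"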

Next, we need to modify the existing individual network connections in order to make them part of a bonded interface. The un-bonded network interfaces are picked up on boot, so it's just a matter of making a quick edit to each interface's config file. Please see my technical wiki article showing you exactly what to put into your network config files.
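The edit to each member interface is small; here is a sketch of one such file (again with a placeholder interface name):

    # ifcfg-enp1s0f0 – one file like this per physical interface joining the bond
    DEVICE=enp1s0f0
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=none
    MASTER=bond0
    SLAVE=yes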

FreeNAS is much more straightforward than CentOS. Upon boot, FreeNAS will give you a list of options from 1-14:
  • Select Option 2 for 'Link Aggregation'.
  • Next, select the option to 'Create Link Aggregation'.
  • Select a bond protocol from 1-6. In FreeNAS, Option 5 will give the round-robin policy, but remember you can select any mode you wish, depending on your application.
  • You will be prompted to select your interfaces to be used in the bond. Select each individual interface you would like to bond.
  • A reboot is required to implement these changes.
  • After reboot, you can select Option 1 to configure the newly formed bond Interface just as if you were to configure a normal interface.
Additionally, FreeNAS makes this process easier by adding a web GUI feature that allows you to bond interfaces. This is also documented on our wiki in the NAS Appliance section.
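On the CentOS side, one quick way to sanity-check the result before benchmarking is to read the bond state the kernel exposes under /proc:

    cat /proc/net/bonding/bond0
    # Look for "Bonding Mode: load balancing (round-robin)"
    # and an "MII Status: up" entry for the bond and for each member interface.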
Now that I have bonded my network interfaces using round-robin mode in hopes of doubling my bandwidth, let's see if it worked!

RESULTS AND OPTIMIZATION

I tested the bandwidth achieved through network bonding in our lab, using the Host Storinator and Storinator Client #1, both running CentOS 7, each with a single Intel X540-T2 NIC Card to our Netgear XS708E switch. 

I then ran iperf and saw our bandwidth was around 11.3 Gigabits/s.


Iperf output of two interfaces bonded without any network tuning.
I was puzzled by this, since I was expecting a number closer to 20 Gigabits/s, so I spent some time trying to tune our network. Some reading told me that the default TCP window sizes resulted in poor performance on much newer 10GbE infrastructure, so I played with various TCP window sizes, but only saw further performance degradation.

Further investigation revealed that the optimal TCP window size is directly related to the bandwidth-delay product, which is the product of the network speed (in bits per second) and its round-trip delay time (in seconds). When TCP was first defined (1974), the optimal window size was selected based on the network speeds of the day. Since then, however, network speeds have increased dramatically, so the optimal TCP window size needs to be re-evaluated. Luckily, it turns out that Linux (and most other OSes) scales the TCP window size according to your connection speed, so no tuning is required and the ideal window size is determined by the kernel.
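To put a rough number on it: on a low-latency LAN (assuming a round-trip time of about 0.1 ms, which is a ballpark guess for this kind of setup), the bandwidth-delay product for a 20 Gigabit link is only

\[
\text{BDP} = 20\,\text{Gbit/s} \times 0.0001\,\text{s} = 2\,\text{Mbit} \approx 250\,\text{KB},
\]

which is comfortably within the range the kernel's auto-tuning reaches on its own – another reason hand-tuning the window bought us nothing.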

My next thought was to use jumbo frames rather than the default 1500 bytes. Put another way, I considered the size of the packets being transmitted across the network, technically called the MTU (Maximum Transmission Unit). Like the TCP window size, as network speeds increased, a small MTU became less efficient.

Changing to jumbo frames did indeed make an improvement, increasing our bandwidth from 11.3 to 13.9 Gigabits/s. Because of that, I now always keep the MTU at 9000 bytes on a 10GbE network. (A side note to keep in mind: when making this change yourself, make sure all of your components support jumbo frames. Most components made today do.)
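For anyone making the same change, it's a one-line tweak; on CentOS you can flip it live and then make it persistent in the interface config (the interface name here is a placeholder):

    # Enable jumbo frames on the bonded interface right away
    ip link set dev bond0 mtu 9000

    # To make it persistent, add this line to ifcfg-bond0 (and to each member interface file):
    MTU=9000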


Iperf output of two interfaces bonded using jumbo frames.

Still stumped as to why I wasn’t seeing 20 Gigabits/s, I decided to place two Intel NIC Cards in each machine and bond 2 interfaces to the switch, one interface per card. 

This resulted in success – I had a 20 Gigabit Connection!


Iperf output of a 20 Gigabit connection with two NIC Cards.

It turns out that while the Intel cards are 10 Gigabit NICs, they cannot handle 10 Gigabits/s out of each port at once – only about 14 Gigabits/s between the two ports (the bottleneck must be within the card architecture itself or the PCI connection).

Three NIC Cards with 3 interfaces will give you bandwidth just shy of 30 Gigabits/s. That is a very large highway! 


Iperf output of a 30 Gigabit connection using 3 interfaces.
However, I believe that if you are trying to achieve bandwidth of 30Gb, you are better off using two NIC cards with two interfaces each, offering ~14Gb per card. 

This way, you also achieve better redundancy, and your costs are lower compared to having three NIC cards.

CONCLUSION

While this post touched on how easy and inexpensive it is to set up a 10GbE network, my main focus was to share my experience with setting up a 20GbE network through network bonding.

In doing my experiments, I learned a few key things: 

  • It is easy to expand your 10 Gigabit network and provide link redundancy through network bonding. Check out our technical wiki for a comprehensive guide to setting up a bonded interface in both CentOS and FreeNAS.
  • Not all network cards are created equal, as the ones I used cannot do 10 Gigabits/s per port. All I could achieve was a sum of 14 Gigabits/s per card.
  • Jumbo frames make a significant difference in terms of bandwidth; however, it’s best to leave the TCP window size alone, as the OS will scale the window size according to your network speed.

How to Decide on the Best RAID Configuration For You

A common question we get asked here at 45 Drives is, “What RAID should I use with my Storinator?”

Our answer: “What are you trying to do?”

Choosing which RAID level is right for your application requires some thought into what is most important for your storage solution. Is it performance, redundancy or storage efficiency? In other words, do you need speed, safety or the most space possible?

This post will briefly describe common configurations and how each can meet the criteria mentioned above. Please note I will discuss RAID levels as they are defined by Linux software RAID (mdadm). For other implementations, such as ZFS RAID, the majority of this post will hold true; however, there are some differences when you dig into the details. These will be addressed in a post to come! In the meantime, check out the RAIDZ section of our configuration page for more information.

Standard RAID Levels

Let’s start with the basics, just to get them out of the way. There are a few different RAID configs available, but I am only going to discuss the three that are commonly used: RAID 0, RAID 1 and RAID 6. (But if you want more information, see Standard RAID Levels).

A RAID 0, often called a “stripe,” combines disks into one volume by striping the data across all the disks in the array. Since you can pull files off of the volume in parallel, there is a HUGE performance gain as well as the benefit of 100% storage efficiency. The caveat, however, is since all the data is spread across multiple disks, if one disk fails, you will lose EVERYTHING. This is why one of our friends in the video production industry likes to call RAID 0 the “Scary RAID”. Typically, a RAID 0 is not used alone in production environments, as the risk of data loss usually trumps the speed and storage benefits.
A RAID 1 is often called a “mirror” – it mirrors data across (typically) two disks. You can create 3 or 4 disk mirrors if you want to get fancy, but I won’t discuss that here, as it is more useful for boot drives than for data disks (let us know if you’d like a blog post on that in the future). RAID 1 gives you peace of mind knowing that you always have a complete copy of your data disk should one fail, and short rebuild times if one ever does. The caveat this time around is that storage efficiency is only 50% of raw disk space, and there is no performance benefit like the RAID 0. In fact, there is even a small write penalty: every time something needs to be written to the array, it has to be written twice, once to each disk.
A RAID 6 is also known as “double parity RAID.” It uses a combination of striping data and parity across all of the disks in the array. This has the benefit of striped read performance and redundancy, meaning you can lose up to 2 disks in the array and still be able to rebuild lost data. The major caveat is that there can be a significant write penalty (although, bear in mind, the Storinator's read/write performance is pretty powerful, so the penalties may not be a huge concern).

This write penalty arises from the way the data and parity are laid out across the disks. One write operation requires the volume to read the data, read the first parity, read the second parity, then write the new data, write the first new parity and then the second parity. This means that for every one write operation, the underlying volume has to do 6 IOs. Another issue is that the time for initial sync and rebuild can become large as the array size increases (for example, once you get around 100TB in an array, initial sync and rebuild times can easily reach the 20-hour range). This bottleneck arises from computation power, since Linux RAID mdadm can only utilize one core of a CPU. Storage efficiency in a RAID 6 depends on how many disks you have in the array. Since two disks must always be used as parity, as you increase the number of disks, the penalty becomes less noticeable. The following equation represents RAID 6 storage efficiency in terms of the number of disks in the array (n).
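Since two disks' worth of capacity always go to parity, that works out to

\[
\text{storage efficiency} = \frac{n-2}{n}.
\]

A 15-drive RAID 6, for example, keeps 13/15, or roughly 87%, of its raw capacity as usable space.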


Personally, I like to stay away from making a single RAID 6 too large, as rebuild times will start to get out of control. Also, statistically, you are prone to more failures as the number of disks rises, and double parity may no longer cut it. In the section below, I discuss how to avoid this issue by creating multiple smaller RAID 6s and striping them.
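Before moving on to nested levels, here is roughly how each of the three standard levels above would be created with mdadm – a minimal sketch, with placeholder device names and example drive counts:

    # RAID 0 ("stripe") across four drives
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]

    # RAID 1 ("mirror") across two drives
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf /dev/sdg

    # RAID 6 (double parity) across eight drives
    mdadm --create /dev/md2 --level=6 --raid-devices=8 /dev/sd[h-o]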

Nested RAID Levels

Building off the basics, what is typically seen in the real world is nested RAID levels; for example, a common configuration would be a RAID 10.

A RAID 10 can be thought of as a stripe of mirrors – you get the redundancy, since each disk has an exact clone, as well as improved performance, since you have multiple mirrors in a stripe. The only downside to a RAID 10 is that storage efficiency is 50%, and it shares the same 2-IOs-for-every-write penalty as a simple RAID 1. For more information and instructions on how to build one, check out our RAID 10 wiki section.
Another interesting nested RAID is a RAID 60, which can be thought of as a stripe of RAID 6s. You get the solid redundancy and storage efficiency of a RAID 6 along with better performance, depending on how many you stripe together. In fact, depending on how many RAID 6s you put into the stripe, the number of drives that can fail before total loss increases to m*2, where m is the number of RAID 6s in the stripe. Keep in mind, though, that as you increase the number of arrays in a stripe, the space efficiency will decrease. Our wiki has more information and instructions on how to build a RAID 60. The following equation gives the storage efficiency of a RAID 60 in terms of the total number of drives in the system (N) and the number of RAID 6s in the stripe (m).
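With 2m drives in the stripe consumed by parity, that is

\[
\text{storage efficiency} = \frac{N-2m}{N}.
\]

For example, 45 drives arranged as three 15-drive RAID 6s give 39/45, or about 87%, of raw capacity as usable space.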




CONCLUSIONS

To tie things back into how I began, now that we’ve discussed various RAID levels and their pros and cons, we can make some conclusions to determine what the best choice is for your application.

Performance:

If you need performance above all else and you don’t care about losing data because proper backups are in place, RAID 0 is the best choice hands down. There is nothing faster than a RAID 0 configuration and you get to use 100% of your raw storage.

If you need solid performance but also need a level of redundancy, RAID 10 is the best way to go. Keep in mind that you will lose half your usable storage, so plan accordingly!

Redundancy:

If redundancy is most important to you, you will be safe choosing either a RAID 10 or a RAID 60. It is important to remember when considering redundancy that a RAID 60 can survive up to two disk failures per array, while a RAID 10 will fail completely if you lose two disks from the same mirror. If you are still unsure, the deciding factor here is how much storage you need the pod to provide. If you need a lot of storage, go the RAID 60 route. If you have ample amounts of storage and aren’t worried about maxing out, take the performance benefit of the RAID 10.

For a RAID 60, you will have 6 IOs for every write, while with a RAID 10 you only have 2 IOs for every write. Therefore, a RAID 10 writes faster than a RAID 60.
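As a back-of-the-envelope illustration (the per-disk figure here is an assumption, in the right ballpark for 7200 RPM drives), random write throughput is roughly the raw IOPS of all the disks divided by the write penalty. With 30 disks at about 150 IOPS each:

\[
\text{RAID 10: } \frac{30 \times 150}{2} = 2250 \text{ write IOPS} \qquad \text{RAID 60: } \frac{30 \times 150}{6} = 750 \text{ write IOPS}
\]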

Perspective on Speed

When talking about write penalties, keep in mind that the Storinator is a powerful machine with impressive read/write speeds, so even if the RAID you go with has a greater write penalty, you will be more than satisfied with your storage pod's performance. For more information, please review our RAID performance levels.

Space Efficiency:

If packing the pod as full as possible is the most important thing to you, you will most likely want to use a RAID 60 with no more than 3 arrays in the stripe (4 in an XL60, and 2 in a Q30). This will give you a storage efficiency of 86%. You could also use a RAID 0 and get 100% of raw space, but this is really not recommended unless you have a solid backup procedure in place to ensure you don’t lose any data.

I hope this helps ease the decision of picking the right RAID configuration. And don’t forget to check our technical wiki for more information, especially the configuration page detailing how to build all the RAIDs mentioned in this post, as well as some performance numbers.

MCP Implementing an Advanced Server Infrastructure (70-414) – another study guide

To prepare seriously for this certification, here is a lot of content to read and understand! As with every other Microsoft certification, it helps to have a technical background and hands-on experience with Microsoft infrastructure (Windows Server 2003 –> 2012, clustering and System Center).
Official link on Microsoft Web site : http://www.microsoft.com/learning/en-us/exam-70-414.aspx
******************************************** 
Manage and maintain a server infrastructure (25–30%) 
********************************************
- Design an administrative model - 
-> Design considerations including user rights, built-in groups, and end-user self-service portal; design a
delegation of administration structure for Microsoft System Center 2012
How to Create a Delegated Administrator User Role in VMM http://technet.microsoft.com/en-us/library/hh356037.aspx
- Design a monitoring strategy - 
-> Design considerations including monitoring servers using Audit Collection Services (ACS), performance
monitoring, centralized monitoring, and centralized reporting; implement and optimize System Center 2012 – Operations Manager management packs; plan for monitoring Active Directory
Agentless Monitoring in Operations Manager http://technet.microsoft.com/en-us/library/hh212910.aspx
Well-known security identifiers in Windows operating systems  (Event Log Readers group) http://support.microsoft.com/kb/243330/en-us
SQL Server Reporting Services (SSRS)
Defining a Service Level Objective Against an Application http://technet.microsoft.com/en-us/library/hh230719.aspx
- Design an updates infrastructure - 
-> Design considerations including Windows Server Update Services (WSUS), System Center 2012 – Configuration 
Manager, and cluster-aware updating; design and configure Virtual Machine Manager for software update management; update VDI desktop images
WSUS topology designs 
- Single WSUS server 
- Multiple independent WSUS servers 
- Multiple internally synchronized WSUS Servers (1 upstream and multiple downstream servers) 
- Disconnected WSUS Servers
Deploy Replica when you want a server to inherit update approvals from a central server
Windows Internal Database Feature or SQL Server 2008 (or >)
How to Add an Update Server to VMM http://technet.microsoft.com/en-us/library/gg675116.aspx 
–> Add WSUS Console to VMM Server
- Implement automated remediation - 
-> Create an Update Baseline in Virtual Machine Manager; implement a Desired Configuration Management (DCM) 
Baseline; implement Virtual Machine Manager integration with Operations Manager; configure Virtual Machine Manager to move a VM dynamically based on policy; integrate System Center 2012 for automatic remediation into your existing enterprise infrastructure
Overview of Desired Configuration Management http://technet.microsoft.com/en-us/library/bb680553.aspx
Local Storage vs Remote Storage
WSUSUtil tool to configure SSL if used with SCCM
How to Install a WSUS Server for VMM http://technet.microsoft.com/en-us/library/gg675099.aspx
If you install WSUS on a remote server, you must install a WSUS Administration Console on the VMM management server and then restart the VMM service. With a highly available VMM management server, you must install a WSUS Administration Console on each node of the cluster to enable the VMM service to continue to support update management. Update management in VMM requires a WSUS Administration Console, which includes the WSUS 3.0 Class Library Reference.
System Requirements: Update Management http://technet.microsoft.com/en-us/library/gg610633.aspx
cluster-aware updating 
- Remote-updating mode 
- Self updating mode
Windows Server 2012 – Cluster Aware Updating (CAU) in action (few french text but a lot of screenshot in US) http://blogs.technet.com/b/stanislas/archive/2013/01/14/windows-server-2012-cluster-aware-updating-cau-en-action.aspx
Virtual Machine Servicing Tool (VMST) –> need a WSUS or SCCM server in your infrastructure
Introduction to Compliance Settings in Configuration Manager http://technet.microsoft.com/en-us/library/gg682139.aspx
Introduction to Collections in Configuration Manager http://technet.microsoft.com/en-us/library/gg682177.aspx
*********************************************************** 
Plan and implement a highly available enterprise infrastructure (25–30%) 
***********************************************************
- Plan and implement failover clustering - 
-> Plan for multi-node and multi-site clustering; design considerations including redundant networks, 
network priority settings, resource failover and failback, heartbeat and DNS settings, Quorum configuration, and storage placement and replication
Windows Server 2012: Improvements in Failover Clustering (Video 56min) http://technet.microsoft.com/en-us/video/windows-server-2012-improvements-in-failover-clustering.aspx
What’s New in Failover Clustering in Windows Server 2012 http://technet.microsoft.com/en-us/library/hh831414.aspx
Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster http://technet.microsoft.com/en-us/library/jj612870.aspx
witness disk in NTFS only
4 quorum modes 
- node majority 
- node and disk majority 
- node and file share majority 
- no majority
Failover after 5 missed heartbeats (= 5 sec)
Installing the Failover Cluster Feature and Tools in Windows Server 2012 http://blogs.msdn.com/b/clustering/archive/2012/04/06/10291601.aspx
Cluster Shared Volumes Reborn in Windows Server 2012: Deep Dive http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/WSV430
- Plan and implement highly available network services - 
-> Plan for and configure Network Load Balancing (NLB); design considerations including fault-tolerant
networking, multicast vs. unicast configuration, state management, and automated deployment of NLB using Virtual Machine Manager service templates
- Plan and implement highly available storage solutions - 
-> Plan for and configure storage spaces and storage pools; design highly available, multi-replica DFS
namespaces; plan for and configure multi-path I/O, including Server Core; configure highly available iSCSI Target and iSNS Server
The Microsoft iSNS Server only supports the discovery of iSCSI devices, and not Fibre Channel devices
1 disk minimum to create a storage pool 
2 disks minimum to create a resilient mirror virtual disk (standalone server) 
3 disks minimum to create a resilient 2-way mirror virtual disk (cluster deployment) 
5 disks minimum to create a resilient 3-way mirror virtual disk (cluster deployment) 
3 disks minimum to create a resilient parity virtual disk (standalone server; can't be used on a failover cluster)
Deploy Storage Spaces on a Stand-Alone Server http://technet.microsoft.com/en-us/library/jj822938.aspx
Provisioning: thin (flexible) or fixed (better performance)
Clustered Storage space: 
- Fixed provisioning 
- SAS disks only 
- No parity (only simple or mirror virtual disk) 
- ReFS not allowed (CSV incompatible)
- Plan and implement highly available server roles - 
-> Plan for a highly available Dynamic Host Configuration Protocol (DHCP) Server, Hyper-V clustering,
Continuously Available File Shares, and a DFS Namespace Server; plan for and implement highly available applications, services, and scripts using Generic Application, Generic Script, and Generic Service clustering roles
Scale-Out File Server for Application Data Overview http://technet.microsoft.com/en-us/library/hh831349.aspx
up to 64 physical nodes in a cluster 
4000 VM per cluster
Cluster-Aware Updating 
Cluster computer objects in targeted OU
Step-by-Step: Configure DHCP for Failover http://technet.microsoft.com/en-us/library/hh831385.aspx
- Plan and implement a business continuity and disaster recovery solution - 
-> Plan a backup and recovery strategy; planning considerations including Active Directory domain and forest
recovery, Hyper-V Replica, domain controller restore and cloning, and Active Directory object and container restore using authoritative restore and the Recycle Bin
DPM -> 15 min RPO
AD DS Recycle Bin: requires forest functional level Windows Server 2008 R2
Requirements for Active Directory Recycle Bin http://technet.microsoft.com/en-us/library/dd379484(v=ws.10).aspx
Enable Active Directory Recycle Bin http://technet.microsoft.com/nl-nl/library/dd379481(v=ws.10).aspx 
Enable-ADOptionalFeature
DPM to Backup Virtual Machines 
- Protection of a standalone host -> DPM Agent on Hyper-V 
- Protection of the virtual machine –> DPM Agent in VM 
- Protection of a VM running on a clustered host –> DPM agent on all cluster nodes 
- Hyper-V host and storage located on different servers -> DPM agents on both servers; backup occurs at the host level
Hyper-V: To participate in replication, servers in failover clusters must have a Hyper-V Replica Broker
Understand and Troubleshoot Hyper-V Replica in Windows Server “8” Beta http://www.microsoft.com/en-us/download/details.aspx?id=29016
****************************************************** 
Plan and implement a server virtualization infrastructure (25–30%) 
******************************************************
- Plan and implement virtualization hosts - 
-> Plan for and implement delegation of virtualization environment (hosts, services, and VMs), including 
self-service capabilities; plan and implement multi-host libraries including equivalent objects; plan for and implement host resource optimization; integrate third-party virtualization platforms
How to Configure Host Group Properties in VMM http://technet.microsoft.com/en-us/library/hh335101.aspx
Configuring Dynamic Optimization and Power Optimization in VMM http://technet.microsoft.com/en-us/library/gg675109.aspx
The Hyper-V Administrators group is a new local security group. Add users to this group instead of the local Administrators group to provide them with access to Hyper-V. Members of the Hyper-V Administrators group have complete and unrestricted access to all features of Hyper-V.
System Requirements: Citrix XenServer Hosts http://technet.microsoft.com/library/gg610587.aspx
- Plan and implement virtualization guests - 
-> Plan for and implement highly available VMs; plan for and implement guest resource optimization including
smart page file, dynamic memory, and RemoteFX; configure placement rules; create Virtual Machine Manager templates
How to Create a Guest Operating System Profile http://technet.microsoft.com/en-us/library/hh427296.aspx
- Plan and implement virtualization networking - 
-> Plan for and configure Virtual Machine Manager logical networks; plan for and configure IP address and 
MAC address settings across multiple Hyper-V hosts including IP virtualization; plan for and configure virtual network optimization
- Plan and implement virtualization storage - 
-> Plan for and configure Hyper-V host storage including stand-alone and clustered setup using SMB 2.2 and
CSV; plan for and configure Hyper-V guest storage including virtual Fibre Channel, iSCSI, and pass-through disks; plan for storage optimization
Note: SMB 2.2 is an old name; the new name is SMB 3.0.
- Plan and implement virtual guest movement - 
-> Plan for and configure live, SAN, and network migration between Hyper-V hosts; plan for and manage P2V
and V2V
- Manage and maintain a server virtualization infrastructure - 
-> Manage dynamic optimization and resource optimization; manage Operations Manager integration using PRO
Tips; automate VM software and configuration updates using service templates; maintain library updates
Configuring Dynamic Optimization and Power Optimization in VMM http://technet.microsoft.com/en-us/library/gg675109.aspx
Adding and Configuring VMM Library Servers http://technet.microsoft.com/en-us/library/bb894355.aspx
************************************************** 
Design and implement identity and access solutions (20–25%) 
**************************************************
- Design a Certificate Services infrastructure - 
-> Design a multi-tier Certificate Authority (CA) hierarchy with offline root CA; plan for multi-forest CA
deployment; plan for Certificate Enrollment Web Services; plan for network device enrollment; plan for certificate validation and revocation; plan for disaster recovery; plan for trust between organizations
Active Directory Certificate Services Overview (to learn different roles in AD CS) http://technet.microsoft.com/en-us/library/hh831740.aspx
CEP Encryption: allows the holder to act as a registration authority (RA) for Simple Certificate Enrollment Protocol (SCEP) requests
The CAPolicy.inf contains settings that can be used to modify the default installation of the Certification Authority role of Active Directory Certification Service (AD CS). The file is also used when renewing the CA certificate. A CAPolicy.inf file is not required to install AD CS or renew a CA certificate. The file is only needed to modify default settings. Once you have created your CAPolicy.inf file, you must copy it into the %windir% folder (such as the C:\Windows) of your server before you install AD CS or renew the CA certificate.
Cross-certification creates a shared trust between two CAs that do not share a common root CA. These CAs exchange cross-certificates that allow their organizations to communicate. In this way, the organizations do not have to create and manage additional root CAs. Cross-certification might be the best option if a common root CA for both PKIs does not exist.
- Implement and manage a Certificate Services infrastructure - 
-> Configure and manage offline root CA; configure and manage Certificate Enrollment Web Services; configure
and manage Network Device Enrollment Services; configure Online Certificate Status Protocol responders; migrate CA; implement administrator role separation; implement and manage trust between organizations; monitor CA health
Using a Cross-Certification Configuration http://technet.microsoft.com/en-us/library/cc778829(v=ws.10).aspx
- Implement and manage certificates - 
-> Manage certificate templates; implement and manage deployment, validation, and revocation; manage
certificate renewal including Internet-based clients; manage certificate deployment and renewal to network devices; configure and manage key archival and recovery
- Design and implement a federated identity solution - 
-> Plan for and implement claims-based authentication including planning and implementing Relying Party
Trusts; plan for and configure Claims Provider Trust rules; plan for and configure attribute stores including Active Directory Lightweight Directory Services (AD LDS); plan for and manage Active Directory Federation Services (AD FS) certificates; plan for Identity Integration with cloud services
An attribute store in AD FS is a directory or database that you can use to store user accounts and their associated attributes. Attribute stores for AD FS in Windows Server 2012 can be: 
- AD DS 
- AD LDS (LDAP) 
- SQL Server 2005 and > 
- Custom attribute store (e.g., CSV files)
- Design and implement Active Directory Rights Management Services (AD RMS) - 
-> Plan for highly available AD RMS deployment; manage AD RMS Service Connection Point; plan for and manage 
AD RMS client deployment; manage Trusted User Domains; manage Trusted Publishing Domains; manage Federated  Identity support; manage Distributed and Archived Rights Policy templates; configure Exclusion Policies; decommission AD RMS
AD RMS Infrastructure Deployment Tips http://technet.microsoft.com/en-us/library/jj554774.aspx
Only one Active Directory Rights Management Services (AD RMS) root cluster is permitted in each forest. If your organization wants to use rights-protected content in more than one forest, you must have a separate AD RMS root cluster for each forest.
The Service Connection Point (SCP) for Active Directory Rights Management Services (AD RMS) identifies the connection URL for the service to the AD RMS-enabled clients in your organization. After you register the SCP in Active Directory Domain Services (AD DS), clients will be able to discover the AD RMS cluster to request use licenses, publishing licenses, or rights account certificates (RACs).
The Active Directory Rights Management Services (AD RMS) super users feature is a special role that enables users or groups to have full control over all rights-protected content managed by the cluster. Its members are granted full owner rights in all use licenses that are issued by the AD RMS cluster on which the super users group is configured. This means that members of this group can decrypt any rights-protected content file and remove rights protection from it.
What’s New in Active Directory Rights Management Services (AD RMS)? http://technet.microsoft.com/en-us/library/hh831554.aspx
For Windows Server 2012, the following versions of Microsoft SQL Server have been tested and are supported for use with AD RMS deployments: 
- SQL Server 2005 Service Pack 3 
- SQL Server 2008 Service Pack 3 
- SQL Server 2008 R2 Service Pack 1
If you are going to be viewing reports related to AD RMS, you must also install the .NET Framework 3.5. On Server Core installations, the optional Identity Federation Support role service for the AD RMS server role is not supported, because Identity Federation Support relies on a role service of the AD FS server role, the Claims-aware Agent, which is disabled on Server Core installations. Windows Server 2012 also includes the following feature updates, which were recently added as updates for the AD RMS role in Windows Server 2008 R2. 
- Simple delegation: Simple delegation for AD RMS enables the access rights to protected content that are assigned to one person to be delegated to other individuals within the organization. It provides the ability to have content rights assigned to executives and managers easily and effectively delegated to their assistants. Two attributes, msRMSDelegator and msRMSDelegatorBL, must be added to the Active Directory schema. 
- Strong cryptography : enables you to increase the cryptographic strength of your AD RMS deployment by
running in an advanced mode known as cryptographic mode
Test Lab Guide: Deploying an AD RMS Cluster http://technet.microsoft.com/en-us/library/adrms-test-lab-guide-base
I also encourage you to download Windows Server 2012, install it, and test it as much as you can, because there are some questions where you need to have already worked with the user interface or the commands.
You can download an eval version of Windows Server 2012 as: 
- an ISO image : 
http://aka.ms/jeveuxwindows2012 
- a pre-built system on a VHD : http://aka.ms/jeveuxwindows2012
You can also try Windows Server 2012 on Windows Azure IaaS for some scenarios (but not those involving Hyper-V or networking features like DHCP, of course): https://www.windowsazure.com/fr-fr/pricing/free-trial/
