Make your own TV channel

Educational YouTube-style projects are quickly becoming a popular way to incorporate creative technologies into curricular and extra-curricular activities, and it’s easier than you may think to set up your own channel.

There are many companies offering schools the equipment, platforms and training required to create a multimedia broadcasting station that allows users to upload content for viewing and rating by students and teachers. Sites are completely customisable to each school, from the colours and branding to security levels so that only those with a valid username and password have access.

One school using this technology to great effect is Wildern School in Southampton. Katie Broadribb, a teacher at Wildern, spoke at BETT about how the school had been looking for a way to develop an innovative project they called ‘EduTube’ – a safe and secure website where students could upload video, audio and photography to share with fellow students. Wildern were winners of the Becta Best Whole School (Secondary) award in 2007, and had received funding from the DCSF’s Innovation Unit to develop the idea as part of a project to bring web 2.0 technologies to schools.

Displaying students’ creative talents

Wildern School enlisted the help of Trilby Multimedia’s ‘Trilby TV’ platform and, in 2007, set up Wildern TV. The site acts as a platform for the students to display their creative talents and also helps to develop students’ (and teachers’) understanding of ICT, media skills, teamwork and real-world issues encountered in the creative industries, such as copyright and funding.

Trilby TV is a fully managed system; the Trilby team take care of any hardware or software issues. When it comes to website content and moderation, the school has the freedom to decide on the best approach. Wildern School decided that Wildern TV was to be a site for the students, by the students, and so set up an extra-curricular group called ‘The Wildern Moderators’ who decided what could or could not be uploaded. Training was given to the moderators by local production companies and regional TV and radio stations. The students also formed a separate extra-curricular group called ‘The Wildern Producers’, who were all given training in production, lighting, editing and sound.

Engaging students

Since setting up Wildern TV, the school has reported high levels of student engagement and it is even capturing the imagination of disruptive students. It is a platform that anyone in the school can contribute to and its popularity amongst the students means that new content is constantly being submitted.

If you do decide to invest in equipment to set up your own channel, it is important not to leave your teachers in the dark about how to use it. Get everyone involved, from all faculties, and show them how your TV channel can benefit their department. At Wildern School, the students were given a (probably well-deserved) inset day, whilst the teachers were split into their subject faculties and set the task of making a film about their own department. This not only taught the teachers how to use the technology, but also demonstrated how much fun making a film can be; the teachers could now relate to the students’ enthusiasm for multimedia-based projects.

Wildern TV has just reached the end of its first full year of broadcasting and, to celebrate, the school held their very own ‘Wildern Oscars’, a full-on glitzy award ceremony honouring the best films that had been submitted over the course of the year. This could be the start of a great annual tradition at the school and has been well received by the students. With the new incentive to work towards next year’s awards, the standard of film making can only get better.

If you are interested in introducing a project like this to your school, all you need is a dedicated server to run the web platform software and store uploaded footage, a suite of machines powerful enough for video and audio work, and students with a streak of creative flair. If you want to add that little bit extra, why not look at installing some plasma screens around your school so that your TV channel can be broadcast throughout the day for all to see? The Apple MacBook and iMac are ideal for producing high quality content in schools. They come pre-installed with Apple’s acclaimed iLife suite, which makes the Mac a movie-making, audio-producing, photo-editing, website-creating, DVD-authoring machine straight out of the box.

To find out more about how to set up your own TV channel, give us a call on 03332 409 333 or email us at learning@Jigsaw24.com. For more news on technology in education, follow @Jigsaw24Edu on Twitter and ‘Like’ Jigsaw Education’s Facebook page.

Sound for Picture VI: Audio post-production systems

When it comes to building an audio post-production suite, things can quickly become difficult. This is especially true if you’re looking to get Dolby Premier Studio certification for your facility, in which case every single aspect of the build will have to fall within a strict set of criteria. Very few facilities are accredited as Dolby Premier Studios, but that’s not to say that the UK doesn’t have any spectacular post-production houses. If you ever find yourself in the heart of Soho, you’ll immediately notice that many of the UK’s most prestigious production companies are based there. Interestingly, though, post-production is one of the few audio industries that is expanding across the UK – new post-production houses are being established primarily outside of Soho, in Bristol, Birmingham, as far north as Edinburgh, and especially in Manchester, which has seen a tremendous influx of new post facilities in recent years.

With this in mind, it seems apt to dedicate a part of this Sound for Picture series to what actually goes into a post-production suite. Of course, there is by no means a ‘one size fits all’ approach to building a production studio, and if you’re looking into setting up a new facility then I would highly recommend you give us a call so we can help you with the process from beginning to end. With that said, this section of the Sound for Picture series will look at post-production systems in general, focusing on the solutions widely accepted to be the “industry standard”.

Digidesign Pro Tools|HD and Avid Media Composer

No discussion of audio post-production systems would be complete without mentioning Digidesign’s Pro Tools|HD and ICON console systems. Since their release, the ICON consoles have become the heart of audio post and film production and are widely accepted as the standard to aspire to. Arguably, no other software and hardware combination offers the same level of editing and mixing functionality, sound quality, integrated video options and session management – of course, different users and manufacturers in particular will have varying opinions on this based upon their experience and the products they’re charged with promoting, but for me Avid and Digidesign win the integration battle hands-down.

There are, of course, a number of solutions for audio production and video work when it comes to software applications and hardware integration. Essentially it all comes down to which package you feel most comfortable using and how it integrates into your workflow. After many years of working in recording studios running Pro Tools, that’s the system I’m most comfortable with and, if pushed to make a decision, it’s the DAW I’d choose every single time.

While I must stress that this is my own opinion based upon my own experience in a professional environment, I find that the speed and efficiency of the Pro Tools and ICON integration makes the decision-making process much easier – when working to a budget and deadline, making production decisions and having confidence in the tools you work with is paramount. What’s more, since Digidesign is part of Avid Technology, Pro Tools offers seamless integration with Avid’s world-class video editing and storage solutions, allowing you to work with Avid standard definition (SD) and high definition (HD) video resolutions when editing to picture.

One of the most interesting developments from Avid and Digidesign is the new Video Satellite system, which debuted in the closing months of 2008 and was demonstrated in full at this year’s NAB exhibition in Las Vegas. Video Satellite is a software option that allows Pro Tools|HD editors to quickly and easily play Avid HD or SD video sequences from a dedicated Windows-based computer running Avid Media Composer software, in sync with their Pro Tools session. By sending the video workload to a separate Windows-based PC (synced over Ethernet), there is no need to render effects, transcode video or copy files on your DAW machine. You can therefore maintain the full audio track count and processing power of Pro Tools|HD.

For Video Satellite…

–   HP Avid-Certified XW8600 Quad Xeon, 4GB RAM
–   Media Composer-based Video Satellite for Digidesign Pro Tools
–   Avid Media Composer Nitris DX
–   Avid VideoRAID SR attached storage
–   Avid RS422 Deck Control cable for PC
–   Sony HVR 1500A HDV VTR
–   Sony LMD-4250W 42″ HD LCD Monitor
–   Netgear GS108 ProSafe 8 Port Gigabit Ethernet Switch
–   Avid Unity MediaNetwork / ISIS shared storage
–   Kramer SG6005 Black Burst/Bar/Audio Generator

For audio production…

–   2009 Mac Pro 8 Core 2.93GHz, 6GB RAM, 1TB HDD, NVIDIA GeForce GT120
–   Up to 3TB additional internal storage
–   AirPort card
–   Apple 30″ Cinema Display
–   Digidesign ICON D-Command ES
–   Digidesign ICON D-Command ES Fader Module
–   JLCooper Surround Panner
–   Digidesign Pro Tools|HD3
–   Digidesign 192 I/O
–   Digidesign SYNC HD
–   Digidesign DigiTranslator
–   Digidesign MachineControl for Mac
–   Glyph attached storage
–   KeySpan USB Twin Mac Serial Adapter
–   Genelec 5.1 monitoring

Apple Final Cut Pro, Logic Studio and AJA Peripherals

For independent production houses that are operating on a smaller scale, or perhaps those working exclusively with their own content, a full Digidesign and Avid post-production suite will be beyond both their needs and budget. But just because you don’t need the mind-boggling capabilities of Pro Tools|HD, ICON control surfaces and Avid video peripherals doesn’t mean that you want to compromise on quality.

Over the last 5 years, Apple have developed an impressive roster of Pro Applications that have gained widespread approval in both amateur and professional environments. With a feature-set aimed at the most discerning professional and a price tag aimed at the amateur producer, both Logic Studio and Final Cut Studio have become accepted – and sometimes preferred – alternatives to Digidesign and Avid’s software solutions.

For professionals working with video or indeed post-production audio, Final Cut Studio 2 will need no introduction. The suite has found popularity not only amongst small-market independents, but also in the largest media conglomerates. Much of this success can be attributed to the fact that the Final Cut suite handles key workflow tasks such as collaboration with other users, management of media files and seamless multi-point delivery with ease. However, the strength of the Apple Pro Applications predominantly lies in their integration with both Apple’s own hardware and third-party solutions. As you would expect, the applications are designed to work seamlessly with each other on Apple’s own Mac OS X operating system, but are also able to operate with third-party controllers, decks and audio devices. All of this means that you can customise a system that isn’t bound by manufacturer-specific hardware – the hardware you use for video and audio is your choice and not fixed to limited proprietary hardware as it is with Digidesign and Avid peripherals.

With that said, there are certain standards that I would personally recommend if you’re choosing to work with Apple Final Cut Studio for audio post-production, and this is where the impressive “Power Trio” setup comes to the fore. The Power Trio components, although not specifically catering to the video market, provide a level of quality and integration for audio post-production that it would be difficult, if not impossible, to find elsewhere. This system is composed of Apple hardware and software, Apogee converters and I/O, and Euphonix control surfaces.

As we’ve already seen, an audio post-production suite is largely at the mercy of the components that control it – when editing decisions need to be made quickly and confidently, the control of the software is as important as the software itself. Knowing this, Apple, Euphonix and Apogee have worked closely together to provide a seamless workflow that lends itself perfectly to audio post-production.

For audio, Apogee systems range from the small and portable Duet to the fantastic-sounding Symphony system, which can provide between 16 and 64 channels of pristine analogue-to-digital or digital-to-analogue conversion for audio input (for post-production recording of Foley, ADR, etc.) and monitoring in surround. Combine this with Euphonix control of Apple’s Final Cut and Logic Studio suites (using Euphonix’s proprietary EuCon control protocol, developed specifically for this purpose) and you have an audio suite with class-leading audio performance and integration.

Apple Logic Studio 8 with Apogee AD-16X and DA-16X converters and Euphonix Artist Series MC Control and MC Mix surfaces

Of course, in many independent post houses and single-seat suites, the role of the system may not be audio post alone, and video peripherals will be required for capture. For professional work on an Apple system, AJA’s internal and external units offer unparalleled performance. AJA’s KONA cards represent some of the finest uncompressed QuickTime I/O cards available for the Apple platform, with the added benefit of bringing all the quality and function of a fully-featured non-linear editing suite at a fraction of the price. AJA KONA cards are available in both PCIe and PCI/PCI-X versions, so they can be fitted to both Mac Pros and older G5 desktops. The great thing about cards such as the KONA 3, AJA’s flagship PCIe-based product, is that eight channels of audio output are available on 24-bit AES/EBU connections, meaning you can mix your soundtrack in full 7.1 when the card is used in conjunction with a digital-to-analogue converter (DAC). Adding an analogue-to-digital converter (ADC) to the system will also allow input of audio in perfect sync with the video playback for tasks such as ADR recording.

For users who operate on mobile systems such as the Apple MacBook Pro, AJA also offer a stand-alone portable solution in the Io HD, which works over a single FireWire 800 connection. This award-winning product allows producers and editors to work on the road in real time with 720 and 1080 HD, all in full-raster 10-bit 4:2:2, and provides both analogue audio inputs and outputs on XLR (four channels of each) and digital AES/EBU audio inputs and outputs on four BNC connections. To put it simply, monitoring your audio can be as simple or as complex as you need it to be, allowing for the perfect mix of functionality and practicality.

If you have any queries about the products mentioned in this article, get in touch with us on 03332 409 306, email audio@Jigsaw24.com or take a look at our full media and entertainment range.

Sound for picture IV: The audio post-production process

Contrary to popular belief, audio post-production should be a consideration long before the first scenes of a film are even shot. Choosing the best Production Sound Mixer your budget allows will save you a small fortune in post-production once the locked cut has been produced. The Production Sound Mixer heads up the production mix team, who are responsible for recording the live dialogue, and will be your most important ally at this stage in the film’s production. During the shoot, the Production Sound Mixer will fulfil a variety of duties: choosing the correct microphones for any given take, operating the sound recorder, maintaining the sound report, notifying the director of any sound-related problems, ensuring professional-quality recordings, watching for boom shadows, determining sound perspective and recording “room tone”, or ambient sound – all with the sole objective of providing a clean, intelligible, professional soundtrack.

While the role of the Production Sound Mixer has remained largely unchanged as technology has developed, the delivery methods for audio recorded on location have advanced considerably. Before digital recording became as accessible as it is now, production sound reels would be sent to an audio post house every day for transfer into “dailies”, where that day’s selected film takes would be synced with their corresponding audio tracks.

More recently, productions have typically been edited using digital production systems such as Apple’s Final Cut Pro or Avid’s Media Composer software and, as such, procedures have changed somewhat to accommodate this move to new technology. For example, it’s not unusual to find a video post house involved during shooting to telecine selected takes if the production is being shot on film, allowing producers to use video production equipment to complete their film projects (at its most basic, the process simply allows captured content to be viewed with standard video equipment or computers). In addition, dailies will be synced from the production medium – DAT, for example – onto some form of videotape for later digitising. By logging time code correctly from the production source recordings at this stage, the Edit Decision List (EDL) can accurately reflect the time code that was shot with the respective picture and allow automated reloading of production dailies.
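To make the time code bookkeeping concrete, here is a minimal Python sketch – a hypothetical helper, not part of any NLE – that converts a non-drop-frame SMPTE time code into an absolute frame count, which is essentially the arithmetic an EDL relies on (the 25fps default is an assumption for PAL material):

    def timecode_to_frames(tc, fps=25):
        """Convert a non-drop-frame time code 'HH:MM:SS:FF' to an absolute frame number."""
        hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

    # Offset of a take from the start of a reel, in frames:
    print(timecode_to_frames("01:00:10:05") - timecode_to_frames("01:00:00:00"))  # 255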

Of course, if your production is being shot digitally, production audio will be loaded directly into a non-linear editing system (NLE) such as Final Cut Pro, Avid, Premiere and various others. Rather than a real-time transfer, this process is simply a matter of copying audio files (typically stereo WAV format) from a recording drive or DVD provided by the Production Sound Mixer to the NLE, where they can then be synced by the editing team.

In many cases, production sound is just one or two tracks, although many more tracks become available once working within the NLE. While this allows additional sound and mixing to occur in the NLE, many professional producers will opt to work with a dedicated audio production system such as Pro Tools|HD. The decision to utilise a dedicated Digital Audio Workstation (DAW) will largely be determined by the type of project – documentaries and less-complicated projects are typically prepared in the NLE by the picture editor, while narrative features, movies and projects requiring more advanced dialogue, music and effects work will be edited separately in a DAW such as Pro Tools.

Once the editor has synced the production audio in the NLE and the formal editing process has been carried out, the first full assembly of the picture is almost complete. Although small edits may happen here or there, the film is essentially arranged in its final format and is known as a ‘locked cut’. It is at this stage that the bulk of audio post-production begins – the film can be spotted for Foley effects and music, dialogue problems can be identified so that an ADR cue can be recorded, and the need for special effects can be determined.

It may be that your production audio, synced to picture within the NLE, already contains several tracks – anything from multiple tracks of dialogue, arranged by the type and location of microphone used, to ambient sound and spot-miked audio. Having looked at microphone types and applications in previous articles, it’s easy to see how an editor can quickly run out of audio tracks in the NLE. At this stage, many editors will opt to transfer their project to a dedicated audio environment so that it can be perfected by an audio post-production specialist.

The easiest way to transfer audio between applications is by means of an OMF export (picture and effects are not included in the export). This is the simplest, fastest and most efficient means of getting your audio into a DAW, and ensures that your audio tracks are transferred accurately between the NLE and DAW. Unless you’re using Final Cut Pro or Avid systems for your picture editing, you’ll need to make sure that OMF export is an included feature of your NLE. It’s also worth noting that whilst Logic Pro supports OMF export straight out of the box, Pro Tools will require you to purchase an additional piece of software called DigiTranslator. As far as OMF export options go, the following considerations should be made:

–   If you are able to select the type of OMF export, choose OMF Type 2.
–   Exports should be embedded so that the audio gets rendered into one large file.
–   The sample rate of your OMF export should match the sample rate of the NLE. This will typically be 48kHz.
–   When specifying handles for audio, values in the region of 300 frames should help to smooth everything out in the mix (see the quick calculation after this list).
–   Remember that any effects (unless bounced down to the audio file) won’t be included in the export, so it’s best to leave this sort of processing to the DAW.
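For a sense of what that handle recommendation costs in practice, here’s a rough Python calculation (the 25fps frame rate and 16-bit stereo figures are illustrative assumptions):

    FPS = 25                # assumed PAL frame rate
    SAMPLE_RATE = 48000     # matching the NLE, as recommended above
    HANDLE_FRAMES = 300

    handle_seconds = HANDLE_FRAMES / FPS
    handle_samples = int(handle_seconds * SAMPLE_RATE)
    handle_mb = handle_samples * 2 * 2 / (1024 * 1024)   # 16-bit (2 bytes) x stereo (2 channels)

    print(f"{handle_seconds:.0f}s, {handle_samples} samples, ~{handle_mb:.1f}MB per clip edge")
    # 12s, 576000 samples, ~2.2MB per clip edge

In other words, each handle is only a few megabytes of extra audio – cheap insurance for the mixer.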

Since transferring OMF files to an audio post-production engineer is simply a matter of providing the resulting export file on a hard drive or DVD, post-production can happen at any location that meets the requirements of the project. Work can be carried out in a Pro Tools|HD facility for sample-accurate editing, ADR and Foley work, or at a Logic Pro facility, for example, if more creative soundscapes and soundtracks are required. It is during this session that premixing of audio recorded on location, in post-production, and material delivered by composers will take place. Depending on the project, premixing can take hours, days, weeks or even months for films intended for the big screen. Only once the audio elements are equalised and balanced in volume will effects such as reverb be added and the final output format be set – whether this is mono, stereo, 5.1 or 7.1. On large productions, various mixing tasks can be handled by a number of mixing engineers; for example, the lead mixer will handle dialogue and ADR, an effects mixer will take care of sound effects and Foley, and a third mixer will oversee music.

After the final mix has been approved, the resulting audio will be mixed with the correct number of channels, but it will not yet exist on the finished master tape or print. It is at this stage that the majority of projects require a final step called printmastering, which combines several stems (groups of submixes such as dialogue, effects, etc.) into a final composite soundtrack. This composite soundtrack is used to create an optical or digital soundtrack for a feature film release print. For most television applications, a printmaster is typically not required; instead there is a “layback”, where the sound is recorded onto the master tape.

At this stage, it is also common for a Music and Effects (M&E) track to be created. The M&E track includes all of the audio with the exception of any dialogue in the English language, so that foreign language versions of the project can be dubbed at a later date. In instances where English dialogue is recorded as part of the same audio waveform as ambient sounds, further Foley recording may be required to bring these sounds back into the mix.

Want to know more? Call 03332 409 306, email broadcast@Jigsaw24.com or take a look at our full broadcast range.

Sound for Picture II: Transparent Audio Matters

Ten or fifteen years ago, producing sound for picture was a significantly simpler process than it is now – either you had tools for the job and knew how to use them or you didn’t, in which case you would hire a location sound team and ship the final product off to a post-production house.

In that respect things haven’t changed all that much, but as professional video equipment becomes available to users of all levels and at every budget, we find that there are many more people producing video today who may not have experience in location sound recording and audio post-production.

The rise of digital video over the last few years means that the vast majority of users have access to professional equipment that can capture high-quality video at relatively affordable prices. The problem is that many independent projects suffer from recorded sound that is average at best and barely useable at worst, largely because budgets are spent on equipment that gives immediately obvious improvements – a better camera, lens and lighting will all give an instant improvement in the viewed image. A scene that is well-lit in a higher-quality format yields results that are easily justifiable when it comes to loosening the purse strings. Audio, on the other hand, proves to be a bit of a paradox; the better quality it is, the less noticeable it will be. And who wants to spend money on something that becomes less and less noticeable with every penny spent?

It’s important to note that no matter what the project, your audience will expect the audio to be completely transparent. If the audio is a point of discussion in the production, chances are that something isn’t quite right (unless, of course, the discussion is because it’s been done extremely well). The audio will convey almost all of the emotional impact in any given scene of your video. Try watching your favourite scene with the sound on mute and it will become obvious how much emotion and atmosphere is a direct result of audio – visual images without sound simply aren’t that moving. That’s not to say that silent films can’t be sad, happy, scary or dramatic, but they were, after all, conceived to be performed either without sound or with live accompaniment. There may even be scenes you can recall where the distinct lack of audio heightened the impact, but that effect almost certainly comes from the contrast with every other scene that does have sound.

So should a part of your budget be dedicated to something that becomes less obvious with every penny spent? Absolutely! Sound works primarily on a subconscious level when it’s presented alongside a visual medium. Great sound will only enhance your project and, truly, it will be the difference between an amateur and professional production.

The most important thing to mention is that poor quality recordings will always sound poor. Certainly there are techniques and tools to remove unwanted background noise such as ceiling fans or hum and hiss, but none of them leave the audio fully intact, as they generally operate by removing certain frequencies from the waveform. If those frequencies are outside of the range you want to keep – not in the same frequency range as dialogue, for example – you might get away with noise reduction tools, but in the majority of cases audio cleaning will leave unwanted artefacts and simply add another type of noise to your soundtrack.

When scouting for a recording location, there are several things we need to take into account, all the while remembering that preventative measures are far better than corrective ones during the post-production stage. Excessive ambient noise from traffic, groups of people and animals, or building noise from heating and air conditioning systems, computers and machinery can all be minimised to some extent.

Want to know more? Call us on 03332 409 306 or email broadcast@Jigsaw24.com.

Bringing Rendering In-house: The Basic Options

The most obvious option is to cut out the middle man and build your own dedicated render farm. However, if your performance requirements don’t trump any misgivings you may have about the cost of a dedicated farm, it’s not the only option you have…

A number of render management applications don’t necessarily require dedicated hardware for their render nodes. When a rendering job is submitted, these applications can instead utilise the power of your existing workstations and servers, particularly when they’re not busy. This is a great compromise.

Distributed rendering on the computers you already have also helps counter a group of related arguments often raised against an in-house render farm (beyond the initial cost) – that a typical office of, say, 30 workstations is busy enough already, or that the server rack is close to fully populated, so there’s nowhere to put a slew of new hardware.

A great roadmap towards a dedicated render farm


Of course, there is nothing to stop you from combining both options. A distributed render farm, based on the computer resources you already have, is an excellent way to start to enjoy the benefits of in-house rendering – the speed, flexibility and cost savings that you’ll accrue – and as time passes you can begin to invest in dedicated render nodes, adding them as you need them, or as the budget becomes available. Who knows? You could even end up generating income by offering rendering services yourself!

In other words, there is nothing to stop you from gradually creating a hybrid rendering system. Later, you can even reinvest the money you’ve saved by rendering in-house in dedicated rendering hardware.

Adding dedicated rendering hardware makes your facilities ever more efficient and quick to turn work around, all of which clearly helps give you a more competitive commercial and creative edge.

Render farm management software

There’s software available that manages not only the jobs being submitted but also the servers and workstations on which they’re being rendered.

Client software can be installed on any workstation to make it act as a render node. It gets better, though – more advanced render management software, such as Qube!, can even schedule the times when a particular workstation is to act as a render node.

The productivity of your artists and designers is never compromised.
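As a rough illustration of that scheduling idea (a generic Python sketch with assumed availability windows and hypothetical workstation names, not Qube!’s actual configuration format or API), a render manager only needs to check each workstation’s window before handing it work:

    from datetime import datetime, time

    # Hypothetical out-of-hours windows during which each workstation may render.
    RENDER_WINDOWS = {
        "workstation-01": (time(19, 0), time(7, 30)),   # 7.00pm to 7.30am
        "workstation-02": (time(18, 0), time(8, 0)),
    }

    def available_now(host, now=None):
        """Return True if 'host' may act as a render node at this moment."""
        now = (now or datetime.now()).time()
        start, end = RENDER_WINDOWS[host]
        if start > end:                         # window crosses midnight
            return now >= start or now <= end
        return start <= now <= end

    print(available_now("workstation-01"))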

The benefits of in-house rendering


  • The rendering process is fully integrated into your workflow, making life easier.
  • The time from submission of the job to its completed return should be much shorter, giving you an edge.
  • You can be much more flexible about submitting and changing jobs.
  • The initial investment needn’t break the bank.
  • It’s easy to scale rendering performance upwards as your needs grow.

There is a clear roadmap all the way up to a dedicated rendering resource without having to discard any hardware or migrate to a new rendering method.

Call our team on 03332 409 309. Email us at 3D@Jigsaw24.com. Visit us at Jigsaw24.com.

Bringing Rendering In-House: Further Expansion – a Dedicated Resource

As we have already said, many people exploit “quiet time” on their workstations for rendering; they submit frames for rendering either out-of-hours or opportunistically, and this is a great, cost-effective option. However, there are other factors, such as rendering performance, which mean a dedicated render farm may work best:

  • Artists are more creative – When scheduled rendering is not an option (maybe because of tight deadlines and the need to keep the creative process moving) a dedicated render farm won’t draw on the workstations’ processing power or RAM, leaving applications snappy and responsive.
  • Extreme rendering performance – A dedicated render farm is optimised for fast rendering, so the completed job is returned as quickly as possible.
  • Accommodate new projects with ease – The rack-mounted hardware used in dedicated render farms is fully scalable, so if you start a new project you can “scale out” your render farm by adding new render nodes. In an emergency you can also pull in some workstations as additional nodes.
  • Low profile but maximum processor density – There are hardware options that offer thousands of cores per rack, meaning a powerful render farm occupies only around two feet by four feet of expensive floor space.
  • Protected by server room facilities – Most dedicated render farms can be located alongside other computing facilities, and tend to enjoy the protection and security of uninterruptible power, cooling, an industrial-grade power feed and restricted physical access. In other words, maximum uptime.
  • Workstations – Regardless of which platform you’ve chosen, whether it be Boxx workstations or Mac Pros, and whether they’re optimised for professional 3D animation or games development and visualisation, a render farm can be assembled to match.
  • Ethernet switches – Connecting the render farm to your workstations, we can recommend options for high performance or close integration with your existing network infrastructure. Gigabit Ethernet is the most common choice today, although 10 Gigabit is available. Leading brands include Cisco, Juniper, HP ProCurve and 3COM.
  • Render nodes – Using rack-mounted blade servers, it’s straightforward to build a hugely powerful farm. Great choices here include the Boxx RenderBOXX 10200 system or a bespoke HP c-Class Blade system.
  • Storage networking – It’s important to identify the right technology for shared back-end storage. The best option for you will depend on your exact workflow. These options include iSCSI, Fibre Channel, FCoE, InfiniBand, and simple NAS over Ethernet. Each has its specific pros and cons that we can cover.
  • Storage – There are some fantastic options for storage. For example, the Isilon IQ series scales performance right up to 20Gb/sec with 3.45TB of shared storage. Even if the numbers are not your thing, the simplicity of management is compelling. For example, more storage can be added, as you need it, without downtime.

The table below shows some examples of the amount of storage that you may require:

10 minutes of footage (14,400 frames in total, 3 passes) = 43,200 images

Resolution       OpenEXR file size     Total size of project files
1920 x 1080      30MB per image        1,296,000MB (~1.3TB)
1280 x 720       14MB per image        604,800MB (~605GB)
640 x 480        5MB per image         216,000MB (~216GB)
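The totals above are simple multiplication, so it’s easy to budget storage for your own projects with a quick sketch like this (the 24fps frame rate is an assumption that matches the 14,400 frames in the example):

    FPS = 24  # assumed frame rate: 10 minutes x 60s x 24fps = 14,400 frames

    def project_storage_mb(minutes, passes, mb_per_image):
        """Total storage required for a rendered sequence, in MB."""
        frames = minutes * 60 * FPS
        return frames * passes * mb_per_image

    # The 1920 x 1080 row from the table above:
    print(project_storage_mb(10, 3, 30))  # 1296000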

[Diagram: rendering workflow]

Points for Diagram:

A)  An artist clicks the render option in his 3D application and is immediately free to continue with his design work. The job passes to the Supervisor node of the render farm.

B)  The supervisor node breaks the animated sequence into frames and allocates them to specific render nodes.

C)  The render nodes pull files they require from high-speed shared storage and process the frames.

D)  Now complete, the rendered sequence is pulled together and made available to the artist.
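To make steps B and C concrete, here is a minimal Python sketch of the allocation logic – a toy queue-based model with placeholder node names, not how any particular render manager is implemented:

    import queue
    from threading import Thread

    def render_node(name, jobs):
        """Step C: pull frames from the queue (and, in reality, scene files from shared storage) and render."""
        while True:
            try:
                frame = jobs.get_nowait()
            except queue.Empty:
                return
            print(f"{name} rendering frame {frame}")   # stand-in for the real render call

    def supervisor(frames, node_names):
        """Step B: break the sequence into frames and allocate them across render nodes."""
        jobs = queue.Queue()
        for frame in frames:
            jobs.put(frame)
        workers = [Thread(target=render_node, args=(name, jobs)) for name in node_names]
        for worker in workers:
            worker.start()
        for worker in workers:
            worker.join()   # Step D: all frames done, sequence ready for the artist

    supervisor(range(1, 101), ["node-01", "node-02", "node-03"])

Real render managers layer priorities, retries, dependencies and file transfers on top of this basic pattern.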

Call our team on 03332 409 309. Email us at 3D@Jigsaw24.com. Visit us at Jigsaw24.com.

RS control standards explained

What are they? What do they do? What are they for? RS-232, RS-422 and RS-485 are all forms of electronic equipment control communications. They are usually connected via 9-pin “D” connectors, although RS-232 can also be found on 3-pin audio-type mini-jack connectors. Note that any form of electronic path needs two wires to complete a circuit.

RS-232

This is, in its basic form, a 3-wire system. The signal wires are Transmit (TX), Receive (RX) and ground (screen, earth or common are other terms used).

The ground is the common return path for both TX and RX. There are other signals that can be connected on the “D” connectors. Unfortunately, there seem to be as many different RS-232 standards as there are manufacturers of equipment!

This is an unbalanced system, and the maximum cable length allowed varies but is no more than 50ft. It is the most common of the three standards.
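If you want to experiment with RS-232 control from a computer, Python’s third-party pyserial library makes it easy to open a port and exchange bytes. The port name, settings and command bytes below are placeholders – as noted above, every manufacturer defines its own protocol, so check the device’s manual:

    import serial  # third-party 'pyserial' package

    # Placeholder port and settings: consult your device's documentation.
    port = serial.Serial("/dev/ttyS0", baudrate=9600, bytesize=8,
                         parity=serial.PARITY_NONE, stopbits=1, timeout=1)

    port.write(b"\x01\x02")   # TX: send a (hypothetical) two-byte command
    reply = port.read(2)      # RX: read up to two bytes of response
    port.close()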

RS-422

This is a balanced, 5-wire version of RS-232. The signal wires are TX+, TX-, RX+ and RX-. The ground connection plays no part in the signal path. The maximum cable length is 4000ft.

This system is widely used in broadcasting to connect VCRs, vision mixers, non-linear editing systems, transmission management systems and so on.

RS-485

This is a modified version of RS-422 and allows multi-point control of equipment, with the cables looping in and out of each item of equipment. A unique control code is allotted to each item of equipment. This system reduces cable costs.

To find out more, get in touch with us on 03332 409 306 or email broadcast@Jigsaw24.com.

An industry view: The true benefits of Revit Architecture

Introducing Autodesk Revit to your workflow might seem like a lot of hard work, but after the initial learning period you’ll wonder how you ever managed without it; its intuitive approach to building design will improve your efficiency and turnover, and you’ll get the results you want with far less effort. Here, we look at a couple of the key benefits.

Scheduling

Revit allows the collation of building objects and entities within a model (such as doors, walls and windows) to be dynamic, instantly updated and intelligently managed. Creating schedules of objects, materials and areas is one of the most time-consuming and painful processes during tendering and construction. It also leaves a large margin for error, and any changes that are required take a long time and often mean starting the work again!

In Revit, all elements hold editable physical properties such as materials, dimensions, internal/external locations, etc. This is what sets Revit apart from other CAD programs; because the schedule is linked to geometric model objects, you can use it to locate and change object types and properties. It doesn’t matter in which view you change or add an object; it is automatically updated in all views, allowing you more time to do what you do best – designing!

When you create a new schedule, you can select and format a number of varied options; this lets you organise, filter and define the data to display within the schedule. The schedule is then instantly created as a clearly formatted spreadsheet, including text and numerical values. The image below shows an example of a door schedule in a project. As you can see from the two views, when a door type is selected in the schedule it is highlighted in red on the plan. This is helpful on a large project, where it is easy to lose track of a door’s location!

[Image: a Revit door schedule, with the selected door highlighted in red on the plan]

Drawing/sheet set-ups

The fantastic thing when you work in Revit is that some of your views are created as a by-product of the design itself. For example, when drawing in plan view, your elevations are parametrically created at the same time to reflect exactly what is being drawn, including all windows, doors and elements inserted. This saves a lot of time in contrast to traditional CAD methods, where elevations need to be created from scratch and transferred from the plan views.

The same can be said for section views. By simply using the section tool you can select the location, orientation and extents of a section view. Revit will automatically process all objects that are cut through and all objects that may be seen within the view, ensuring nothing is missed (in contrast to traditional CAD methods). This is incredibly powerful, particularly when working within tight timeframes and with demanding design teams/sub-contractors. In real terms the benefits can be seen most clearly when working with, for example, a window manufacturer; he may require a section through a window that isn’t covered by your existing sheet sets. By using Revit’s section tool, you can create, publish and share this section within 10 minutes, whereas with traditional 2D CAD this could take up to half a day!

[Image: a section view created with Revit’s section tool]

Call the CAD team on 03332 409 306 or email CAD@jigsaw24.com with any related questions – we’re always happy to advise.

Are You Using OpenEXR for Your Rendered Frames?

Despite the hours that are often spent rendering out frames of 3D animation, many 3D content creators and visualisers are still unsure of the best options when it comes to an output file format.

Native Video Formats?

When rendering out large volumes of 3D data, I would always advise people to steer clear of animating out to native video files such as .avi or .mpeg. Although these are valid formats, the trouble occurs when you’ve already waited 10 hours for a sequence to render, your system crashes at 98% completion, and you have to start the render all over again!

Still Picture Images

The safer approach is to render out each frame as a “still picture” image. Take the scenario from above: say the system crashes at 98%. If you’re rendering out individual frames, there is no need to start the render from the beginning again. Instead, all that needs to be done is to re-render frames 990 to 1000, say, leaving you without the hassle of an incomplete or corrupt video file.
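That recovery step is easily scripted, too. Here’s a minimal Python sketch that checks which frames are still missing from an output folder – the filename pattern is an assumption, so match it to your renderer’s output settings:

    import os

    def missing_frames(output_dir, total_frames, pattern="frame_{:04d}.exr"):
        """Return the frame numbers that have no rendered file on disk yet."""
        return [n for n in range(1, total_frames + 1)
                if not os.path.exists(os.path.join(output_dir, pattern.format(n)))]

    # After a crash at 98% of a 1000-frame render:
    print(missing_frames("renders", 1000))  # e.g. [990, 991, ..., 1000]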

This then raises the question: “What image file format should I be using?” There are numerous options here, the first being the good old .jpg. There are many advantages and uses for JPEG images but, as far as I’m concerned, 3D is not one of them – not when you’re producing an animated sequence, anyway! The problem with JPEGs is the loss of colour clarity during compression. Now, I hear what you’re saying: “My 3D application allows me to turn off .jpg compression!” That might be the case, but does it really eliminate all image compression when writing the file? I don’t think so – I certainly haven’t found a piece of software that successfully does this!

Your next route is TARGA or TIFF, two other popular formats. TARGA, however, only offers 8 bits per channel – nowhere near enough for the kind of colour quality being demanded today – while TIFF, despite its high-resolution nature, uses a different byte order on Mac and PC, creating colour conflicts in post-processing across different systems.

Recommended Image File Format

A solid choice amongst artists has been the 24-bit PNG. Providing good, solid colour depth without any loss, as well as the ability to store alpha channels directly in the file, PNGs make great files for compositing. As popular as PNGs are, though, there is another file format that really should be your standard for all final renders: OpenEXR.

Created by Industrial Light & Magic (the team behind Narnia and Pirates of the Caribbean) and around since 2003, OpenEXR is a 32-bits-per-channel, high dynamic range imaging file format with unlimited channels, available to use under a free software licence. Key to OpenEXR’s success is that colour depth: essentially, the more bits per channel, the more colours your image can hold and the better it will look.

Increased colour ranges make for an easier job in post-production or when colour grading, with simple yet accurate correction of variables such as exposure, hue/saturation or brightness/contrast. And because of this increased colour information, OpenEXR permits more channels than your standard Red, Green, Blue and Alpha. In fact, you have the choice of any number of channels, opening up the possibility of rendering out things like diffuse, specular or shadows, for example, to their own buffers within one HDR image. These options give greater flexibility and increased control.
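You can see this multi-channel flexibility for yourself with the third-party Python bindings for OpenEXR (the ‘OpenEXR’ package); the filename below is a placeholder for one of your own renders:

    import OpenEXR  # third-party 'OpenEXR' Python bindings

    exr = OpenEXR.InputFile("render_0001.exr")   # placeholder filename
    header = exr.header()

    # A multi-pass render might list R, G, B and A plus, say, diffuse,
    # specular and shadow buffers, all stored in the one file.
    for name in header["channels"]:
        print(name)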

The Verdict

In short, for flexibility and colour clarity, OpenEXR really should be your file format of choice when it comes to final renders, with the option to use PNG as a backup when compatibility or size is an issue.

Find out more by giving us a call on 03332 409 306 or emailing 3D@Jigsaw24.com. For the latest news, follow @Jigsaw24Video on Twitter or ‘Like’ our Facebook page.

Creating stereoscopic images in 3ds Max

Stereoscopic images have been around for years now and are an ever-popular aspect of visualisation and film, featuring in the recent box-office hit Beowulf.

Stereoscopic images are used to create 3D images that give the illusion of depth.

They work by filming the same point of focus from two points, two inches apart. Using traditional cinematography, it can be really tricky to set up two cameras focused on exactly the same point. However, it can be done very simply in 3D applications such as 3ds Max 2008, and the rig can then be imported into any scene.

We’ve come up with a quick workflow that illustrates how to set up cameras and helpers and add them to your scene to create stunning stereoscopic animations.

PLEASE NOTE:

This walkthrough will presume that you have an understanding of how to create basic objects, move and rotate them, and also how to navigate around the Create and Modify tabs in 3ds Max 2008+.

Firstly, we need to set up the correct unit scheme for our blank scene. To do this, select Customize > Units Setup from the menu and set this to US Standard, Fractional Inches. It is easier to set this up now so that when you place the cameras they will be exactly two inches apart – you can always change back to your preferred unit setup afterwards.

[Image: positioning cameras in 3ds Max]

The next step is to place our first target camera into the scene. For now it doesn’t matter where the target is pointing as we’re going to add helpers to control the camera later. Once the camera is in, select the Move tool and set the co-ordinates of the camera to 0,0,0. Then select the target and set the X to 60 and Y/Z to 0.

Select the camera again. This time we’re going to change the Y co-ordinate to 1. Now make a clone of that camera by pressing the keyboard shortcut CTRL+V, which will give you a dialogue box asking whether you would like to create a Copy, Instance or Reference. In this case we want a Copy, so select it and click OK. As we already have the new camera selected, change the Y co-ordinate to -1. You have now created two cameras that are two inches apart from each other.
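The placement itself is simple arithmetic, and it’s worth sanity-checking before you animate. Here’s the same maths as a plain Python sketch (not MAXScript), using the two-inch interaxial distance from the walkthrough above:

    INTERAXIAL = 2.0  # inches between the left and right cameras

    def stereo_pair(target_x=60.0, offset=INTERAXIAL / 2):
        """Return (left, right) camera positions and the shared target, with the rig at the origin."""
        left = (0.0, offset, 0.0)     # Y = +1, as set above
        right = (0.0, -offset, 0.0)   # Y = -1
        target = (target_x, 0.0, 0.0)
        return left, right, target

    left_cam, right_cam, target = stereo_pair()
    print(left_cam, right_cam, target)  # (0.0, 1.0, 0.0) (0.0, -1.0, 0.0) (60.0, 0.0, 0.0)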

[Image: creating a stereo rig in 3ds Max]

We’re now going to add the helper objects that will allow us to move and control the camera/target. This will make your life easier when trying to set up the camera view in your scenes.

What we want is to set up an object from which we can control the camera completely, while also keeping the cameras’ focus on the same point.

The best way to do this is to create a 3D spline that surrounds the cameras, which is easy to grab and manoeuvre.

Firstly, let’s draw a Circle Spline on the scene with a radius of 3 inches, and set the co-ordinates to 0,0,0 so that it sits around the two cameras.

[Image: circle splines in 3ds Max]

Next, create an Instance of the spline by using the keyboard shortcut CTRL+V, and rotate it 90 degrees on the X-axis.

Repeat this process until you have five circles with the following rotations:

1. 0,0,0
2. 90,0,0
3. 0,90,0
4. 90,0,45
5. 90,0,-45

Now that we have our circles, convert one of them to an editable spline (right click one of the circles, and select Convert to Editable Spline) and from the Modifier tab select Attach Mult to attach all the splines together.

[Image: attaching splines in 3ds Max]

At this point, I would recommend that you change the colour of the spline to blue, purely to have some consistency with the 3ds Max colour scheme, as blue is associated with cameras.

[Image: changing colour schemes in 3ds Max]

Next we need to link both cameras to this control object. Select both cameras, either by holding CTRL and clicking on them or by using the keyboard shortcut H to bring up the Scene Selection window, then use the Select and Link tool to link them to the spline.

The problem with this is that if we move the camera around, the target stays locked in its place, which means the angle of the cameras will not generate the correct image – the target needs to be directly in front of the two cameras. This can easily be solved by adding a helper object.

From the panels on the right-hand side, select the Helpers tab and drop in a Point helper. Again, change the colour to blue.

[Image: using helper tools in 3ds Max]

Use the Align tool to centre the helper into the camera targets and, using the same method as before, link the two targets to the helper.

[Image: linking targets in 3ds Max]

You can now quickly check that when you move the helper both the targets move, and also that if you move the camera helper, the cameras move. Link the point helper to the control object and we’re done!

Part 2 coming soon…
We will add this camera rig to your own 3D scene and show you how to composite the images for your final render…

For further tips and advice, call the 3D team on 03332 409 309 or email 3D@jigsaw24.com. You can also visit our website, Jigsaw24.com.