Picture Instruments Mask Integrator 2.0.10
Mask Integrator masks and crops subjects automatically during a photo shoot. Two shots are necessary for every subject: a regularly lit shot and a backlit version of it.
The following photographic setups have been useful for switching between lighting situations in the past:
- Freemask or Freemask 1000 by Hensel
- Other systems which are able to switch between two flash or lighting groups (regular lighting and backlighting)
- Manual switch between the two flash or lighting groups (please note that this is only applicable for static objects)
Overview of the most important features in Mask Integrator:
- Automated detection and loading of new images if the camera directly loads them into the computer
- The mask can be optimized by adjusting the black and white point
- Automatic removal of shading elements at the edge of the image
- Mask correction brushes
- Preview of the masked image in front of a white, black, transparent or any other background
- Scaling and positioning of the mask in front of a background image
- Corrections in position, scale and adjustments of the mask can be applied automatically to new images upon opening them
- Color correction and creative looks through 3D LUTs
- Fine tuning between image and mask (especially for Freemask and Freemask 1000)
- Save a masked image or a folder full of masked images with one simple click of a button.
- Very fast masking of even the most complex subjects
- Consistent product photos in front of pure white
- Shooting a subject for another background (compositing)
- Motto Shoots
- Product photographers
- People photographers
- Online-Shops
- All users of Hensel Freemask, Freemask 1000 or similar systems
- Title: Mask Integrator 2.0.10
- Developer: Picture Instruments
- Compatibility: OS X 10.10 or later, 64-bit processor
- Language: English
- Size: 21.5 MB
Before I dive into the topic of alpha channel masking in ffmpeg, I should explain what this document is. I'm writing it as a blog post; however, I also want it to document what I know about how alpha masking is accomplished in FFMPEG.
Warning: Video examples in this post use VP9 with an alpha channel. At the time of writing, only Firefox and Chrome based browsers will support the content. Edge users (as of April 12, 2020) will see the video but will not see the alpha channel. Safari users will need to convert the videos to H.265 somehow to get the equivalent effect.
Basics of Alpha Channel
When we want something that we are filming or recording to be transparent, the technology we use most often is a thing called chroma keying. You will probably have seen stages or sets with large swathes of green or blue fabric draped all over the place, dyed to a specific colour so that computers can be programmed to recognize that colour in video and automagically strip it from the recording. This is what lets visual effects folks make Superman fly, or X-Wings zip through a Death Star trench run. Since we cannot literally make the world transparent, chroma keying is a clever hack to pretend that it is.
Digital media is different though. Since pretend pixels don't have any basis in reality, we can simply mark certain areas of images as transparent without worrying about matching colours to an arbitrary 'this should be invisible' palette.
The degree to which any given pixel should be transparent is indicated by its alpha channel. Typically, most digital imagery uses an 8-bit alpha channel (256 shades of transparent). Alpha blending is one of the simplest and most widely used mechanisms for compositing digital imagery; it describes the process of combining translucent foreground images with opaque background images.
Support for alpha channels in many media codecs is limited because of historical implementation decisions. For instance, the ubiquitous JPEG format has no internal support for alpha transparency whatsoever, nor does H.264 (often called MP4, which is actually just a cardboard box you can shove movies into). Some digital tools implement chroma keying instead of alpha blending because of technical limitations of these common codecs.
Many codecs don't include alpha channel support because the imagery the designers envisioned encoding with them doesn't have any transparency in it. Consider any digital camera: since the real world that the camera is recording is not transparent, the format that the camera saves its images to has no need to support an alpha channel. Whichever photons arrive at the camera are the ones that colour and shade the image.
In digital video, the modern codecs that I know support an alpha channel are VP9 from Google, ProRes 4444, and H.265 from Apple (and others). Support for these codecs is rather mixed as I write these words. Moreover, the ProRes variant is not intended for digital distribution; it's a format intended for storing high quality intermediate or raw files for editing.
Clear so far? Maybe, but this discussion of codecs isn't done yet. We have compatibility issues to sort out still.
Digital Cardboard Boxes
It is time to talk about containers. I used to get very confused about why some videos I downloaded from the internet would play on my computer while others with the same familiar file extension would not.
I originally encountered this with the AVI file extension: some videos would play and others would complain about something to do with codecs. Well, it turns out that videos stored as AVI aren't actually a fixed format; rather, AVI is a storage container where people could put a variety of audio or video content encoded with different technologies. My computer simply did not have the software installed to understand all the various types of encoded video it was encountering. So despite the fact that my computer could open an AVI file, it did not understand the particular language in which the contents were written.
This problem is even more complicated today than it was in 1992 when I first started having these issues.
There are LOTS of different container formats these days (.mov, .mp4, .mkv, .webm, .flv and many others). Most containers can store a massive variety of different video and audio formats. For instance, the prolific H.264 format can easily be stored in most of the containers listed above. Software support for all the various containers is extremely mixed. Even worse is the support for the actual formats themselves.
Alpha channel support in video codecs
Ok, so let's just imagine you are a mere mortal and want to create an animated video clip with an alpha channel. What are your options today (April, 2020)?
I think it's correct to say that there are at least 3 codec categories:
- Capture
- Intermediate
- Distribution
A capture format would be something like the raw video from a digital camera, or an animation project format in something like Blender or another animation suite. Intermediate formats (like ProRes) are great for editing video clips because their internal storage layout allows tools to quickly decompress and scrub through their contents without causing extra load on the computer, at the cost of increased storage requirements. Distribution codecs (like H.264, VP8 and VP9) do a great job of compressing the video stream, at the cost of requiring extra computation time to render each individual frame.
Intermediate codecs with alpha channel support
Format | ffmpeg | Resolve | FinalCut | Premiere |
---|---|---|---|---|
ProRes4444 | Full | Partial | Full | Full |
Cineform | Decode | Full | ? | ? |
DNxHR | No Alpha | Full | ? | ? |
QTRLE | Full | None | Full | ? |
TIFF | Full | Full | Full | Full |
Warning: ProRes4444 export from Davinci Resolve is only supported on Linux and Mac platforms (at time of writing, April 2020).
ProRes4444 is probably the most widely supported intermediate stage codec, but since I cannot export ProRes4444 files from my Windows Davinci Resolve suite, using it requires some extra effort.
Actually, and this really surprised me, the best supported cross-platform way of storing intermediate video files for animation is to not use a video format at all, and instead just save a large collection of compressed TIFF images, one per rendered frame, and pretend it's a video, like you may have done with children's animated flip books as a youth.
Distribution codecs with alpha channel support
Format | ffmpeg | Resolve |
---|---|---|
HEVC H.265 | No Alpha | Decode |
VP8 | Full | None |
VP9 | Full | None |
H.265 is the upcoming replacement for H.264 and does support an alpha channel in its specification, but toolchain support for that technology is really only available on Apple platforms as I write these words.
Warning: H.264 does NOT support alpha channel and is not included here.
Note: ffmpeg does actually have an H.265 encoder via a third party library, but as of now it does not support alpha channel to my knowledge, though I would be happy to be proven wrong about this.
Currently there is no widely available distribution type codec supporting an alpha channel that works across all devices, but VP9 does have a massive adoption base from all recent versions of Chrome and Firefox. Presumably, since Edge already supports VP9, it will add alpha channel support at some point in the future, but today it does not support VP9 alpha blending.
Codec Choices Summary
As best I can tell from experimentation and research throughout the public Internet, ProRes4444 has the most toolchain support across common platforms. However, using sequences of compressed TIFF images is completely supported in every editing tool and is probably the best way to store intermediate video content for later editing if maintaining an alpha channel is important.
For archiving and web purposes, VP9 can be used in every modern version of Firefox and Chrome, and has official (but not bundled by default) support from Microsoft. Apple users can use H.265, but toolchain support for creating those videos with an alpha channel is very thin at the moment.
Practical Examples of Alpha Channel Video
So far we've identified a few codecs and technologies we can use to keep track of transparency in videos. Let's take a look at how the ffmpeg tool is used practically to create animated video content with an alpha channel. Then we will dive into animating and masking existing videos with ffmpeg.
Using FFMPEG to Encode VP9 with Alpha Channel
Ok, let's just assume you have already done the work to create an animation sequence of TIFF images that you would like to transform into a VP9 file for distribution onto the web.
The common format for storing VP9 video is the webm container, and you can instruct ffmpeg to create a rough video with instructions like the following:
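The command itself did not survive this copy of the post; based on the flag table that follows, it would have looked roughly like this (the `frame_%04d.tiff` input pattern and the output name are my assumptions, not the originals):

```shell
# Transcode a TIFF sequence (frame_0001.tiff, frame_0002.tiff, ...) into a
# VP9 webm with its alpha channel intact. -auto-alt-ref 0 disables alt-ref
# frames, which conflict with transparency encoding in libvpx.
ffmpeg -y -r 25 -i frame_%04d.tiff \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 \
  output.webm
```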
Name | Required | Description |
---|---|---|
i | Required | Input file(s), %04d is a 4 digit pattern |
r | Optional | Frame Rate, default is 25 if unspecified |
c:v | Required | Codec, VP9 here supports Alpha channel |
pix_fmt | Required | yuva420p is required to support alpha channel |
You could choose to run 2 passes on the source material to create a slightly higher quality final product like this (on Windows):
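The two-pass variant is also missing from this copy. On Windows it would have looked something like the sketch below, where `NUL` is the Windows null device (use `/dev/null` on Linux or Mac); the target bitrate is an assumed value:

```shell
# Pass 1: analyze only, writing stats to ffmpeg2pass-0.log and discarding output
ffmpeg -y -r 25 -i frame_%04d.tiff -c:v libvpx-vp9 -pix_fmt yuva420p \
  -auto-alt-ref 0 -b:v 2M -pass 1 -an -f null NUL
# Pass 2: encode for real using the stats gathered in pass 1
ffmpeg -y -r 25 -i frame_%04d.tiff -c:v libvpx-vp9 -pix_fmt yuva420p \
  -auto-alt-ref 0 -b:v 2M -pass 2 output.webm
```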
Using FFMPEG to Encode ProRes4444 with Alpha Channel
Since ProRes enjoys wide support across most video toolchains, here is a command to create a ProRes4444 video clip from a sequence of TIFF images with alpha channel enabled.
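Again the command block is missing from this copy; given the flag table below, it was presumably close to this (input pattern and output name are mine):

```shell
# Wrap a TIFF sequence into a ProRes4444 .mov, keeping the alpha channel.
# -profile:v 4444 selects the alpha-capable ProRes profile.
ffmpeg -y -r 25 -i frame_%04d.tiff \
  -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le \
  output.mov
```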
Name | Required | Description |
---|---|---|
i | Required | Input file(s), %04d is a 4 digit pattern |
r | Optional | Frame Rate, default is 25 if unspecified |
c:v | Required | Codec, prores_ks here supports Alpha channel |
pix_fmt | Required | yuva444p10le needed for ProRes alpha channel |
Note: In my testing, using a sequence of TIFF images actually takes up less space and is higher overall quality than transcoding to ProRes. Testing was not rigorous though, so take this feedback with a grain of salt.
Using FFMPEG to Preprocess Video for Editing
For the rest of this article I'm going to walk through the procedure I will use to recreate a video clip that looks like this:
As you may or may not know, as part of a hobby and a wish to develop a new set of skills, I have been recording gameplay footage of an old timey DOS game called Wizardry 7: Crusaders of the Dark Savant. I played the game for years as a young lad and figured I would like to share it with the world, since it was so important to my adventures growing up. The game normally plays in a 4:3 DOSBox window; with some gentle massaging I have stretched it to a 16:9 aspect ratio and overlayed an out of game map onto the video.
All well and good, what does this have to do with ffmpeg?
It turns out that my old workstation and the handy Davinci Resolve suite don't really get along. It crashes all the time sadly, despite using updated drivers, magical fairy dust, and ground up chili peppers. I resorted to using ffmpeg to preprocess a number of video scenes so that Resolve has fewer things to complain (crash) about.
In particular, the map element you can see in that image above does not exist in the original game; I am overlaying it on top of the video when appropriate. Moreover, it is being recorded from an out of game mapping tool and encoded into the same video stream as the game recording.
Here is where ffmpeg comes into play for my workflow. First, I need to crop and resize the gameplay and map into distinct video files. This is where ffmpeg really shines, since it allows you to create complex video processing schemes and chain together all sorts of operations.
I am not going to walk step by step through the mechanisms that ffmpeg uses to crop video content here; this has been covered elsewhere sufficiently well that I don't think it's worth trying to recreate that author's guidance here.
With feedback from the above article I created a fancy batch script to preprocess my video content. Let's take a look at it:
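The script itself was lost from this copy of the post. Below is a sketch of what it plausibly did, written as a shell loop rather than the original Windows batch file. The crop geometry (a 1440 pixel wide gameplay region and a 480 pixel wide map strip out of a 1920x1080 frame), the 80 Hz highpass cutoff, and all file names are my assumptions:

```shell
#!/bin/sh
# For every .mkv recording: split out the gameplay and the map regions.
for f in *.mkv; do
  base="${f%.mkv}"
  # Gameplay: crop the left region, mix the computer and microphone audio
  # tracks together, and highpass the result to remove low-frequency rumble.
  ffmpeg -y -i "$f" \
    -filter_complex "[0:v]crop=1440:1080:0:0[game];[0:a:0][0:a:1]amix=inputs=2,highpass=f=80[aud]" \
    -map "[game]" -map "[aud]" -c:v libx264 -c:a aac "${base}_gameplay.mp4"
  # Map: crop the right-hand strip, drop the audio entirely.
  ffmpeg -y -i "$f" -vf "crop=480:1080:1440:0" -an -c:v libx264 "${base}_map.mp4"
done
```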
Essentially what is happening here is this: for every .mkv file in the directory, I am cropping the map and gameplay video into distinct video streams. Before saving the streams I am passing the gameplay audio through a highpass filter to remove some low-frequency sound artifacts caused by the emulation of the MPU401 sound device.
Running the script as is results in the creation of two new MP4 video files: the first is the gameplay video combining the audio tracks from the computer and my microphone. The second video file just contains the video of the cropped map, which in turn looks just like this.
This is certainly a start, but I would like to recreate the same type of effect that I was able to generate in Davinci Fusion. The entire point of the first 1000 words of this article was to introduce the fundamentals required to create an alpha masked video element that can be imported easily, with its alpha channel intact, into my editor of choice.
Alpha Manipulation with FFMPEG
As I am writing these words I don't actually know how to go about what I am trying to do here, but let's take a look at how ffmpeg handles drawing a canvas. We'll give ourselves a simple goal at first: let's set an entire video to 20% transparency and export it as a VP9.
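The original command is missing from this copy. Based on the bullet points that follow, and the later mention of dropping `@.2` from the color filter, it would have been close to this (the canvas size, duration and output name are assumed):

```shell
# Generate a solid red canvas at alpha 0.2 and export it as VP9 with alpha.
ffmpeg -y -f lavfi \
  -i "color=color=red@.2:size=640x360:duration=2,format=yuva420p" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 red20.webm
```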
That wasn't so hard, but let's take a look at the command.
- Use `-f lavfi` as the input format to the libavfilter virtual device
- Set the filter input source using the `-i` flag (not `-vf`)
- Provide arguments as complete key-value pairs, like: `color=color=red`
- Select the `yuva420p` pixel format for compatibility with VP9 alpha export
Note: If you are planning to export to ProRes4444, be sure to use the `yuva444p10le` pixel format instead.
Ok, so let's try something a little more complicated: let's draw a diagonal line and make the rest of our generated video clip transparent. We use the `geq` filter to do this because it allows us to manipulate the alpha level of every pixel directly using an expression. I will drop the `@.2` from the color filter as well since it won't be needed after this.
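The command that the following paragraphs dissect did not survive this copy; reconstructed from the description, it plausibly looked like this:

```shell
# Draw an opaque diagonal line (where X == Y) on a transparent canvas.
# lum='p(X,Y)' keeps the source luminance; the alpha expression does the work.
ffmpeg -y -f lavfi \
  -i "color=color=red:size=640x360:duration=2,format=yuva420p,geq=lum='p(X,Y)':a='if(eq(X,Y),255,0)'" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 line.webm
```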
Note: Before I continue I should mention that I don't have any earlier experience with the `geq` filter and similar advanced ffmpeg filtering techniques. All the examples you are reading here are created from snippets I have found left in old emails and user questions on the web, combined with a snippet of late night coffee and figuring-it-out-for-myselfedness.
Let's dissect the `geq` filter written above:
As best I can tell, every `geq` filter command requires a luminance expression. The `p(X,Y)` expression here returns the current luminance value for each pixel and sets it to itself. Since we need some expression here, setting the image to its source values is preferred.
The second part of the filter is the optional alpha channel expression, where we manually calculate the desired alpha channel for every pixel. In our example here `if(eq(X,Y),255,0)` is our expression, which says roughly: if the X and Y coordinates are equal, set the current pixel's alpha channel to 255, otherwise set it to 0. What results is the straight diagonal line that your grade school algebra teacher taught you all about in grade 7. Try it for yourself and fiddle with the expression; you can generate any angle you want if you remember the old slope formula:
Evaluating FFMPEG Expressions
Drawing the line was pretty straightforward, but how did I know to use `if(eq(X,Y),255,0)`? Well, truthfully, I saw an example of this on some old mailing list archive, but it dawned on me that ffmpeg has a large collection of expressions that we can use.
The expression documentation on the ffmpeg website lists a full set of expressions that we can use when designing our formulas. I will share a handful that catch my eye:
Expression | Description |
---|---|
if(A, B, C) | If \(A \neq 0\) return \(B\), otherwise return \(C\) |
gte(A, B) | Return 1 if \(A \geq B\) |
lte(A, B) | Return 1 if \(A \leq B\) |
hypot(A, B) | Returns \(\sqrt{A^2 + B^2}\) |
round(expr) | Rounds expr to the nearest integer |
st(var, expr) | Save value of expr in internal variable var, numbered \(\{0, \dots, 9\}\) |
ld(var) | Returns value of internal variable var saved with st |
There are also a number of predefined variables:
Variable | Description |
---|---|
N | The sequential number of the filtered frame, starting from 0 |
X, Y | Coordinates of the currently evaluated pixel |
W, H | The width and height of the image |
T | Time of the current frame, expressed in seconds |
Drawing shapes in FFMPEG with Alpha!
Alright, so let's practice a bit with the geq filter and see if we can create commands that can draw some simple shapes and patterns.
Set lower third of video opaque:
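The command for this example is absent from this copy; following the pattern of the earlier examples, it was presumably:

```shell
# Make only the lower third of the canvas opaque (Y grows downward).
ffmpeg -y -f lavfi \
  -i "color=color=red:size=640x360:duration=2,format=yuva420p,geq=lum='p(X,Y)':a='if(gte(Y,2*H/3),255,0)'" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 lower_third.webm
```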
This might catch you by surprise, but look at the expression `gte(Y,2*H/3)`, or \(Y \geq \frac{2}{3}H\). It may seem odd that setting the lower third of the video to opaque is done by filtering for pixels with a \(Y\) coordinate greater than \(\frac{2}{3}\) of \(H\). The reason for this is that the top left of the image is actually \((0,0)\) and the \(Y\) coordinate grows as we move towards the bottom of the image. There is surely some sort of official name for this, but you need to become familiar with working in this mirrored positive coordinate space.
Let's try another: let's fill in the top right diagonal with opaque pixels:
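Again the command itself is missing here; matching the description, it was presumably:

```shell
# Opaque where Y <= X (the top right half), transparent elsewhere.
ffmpeg -y -f lavfi \
  -i "color=color=red:size=640x360:duration=2,format=yuva420p,geq=lum='p(X,Y)':a='if(lte(Y,X),255,0)'" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 top_right.webm
```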
Now, pixels where \(Y \leq X\) are opaque and the others are transparent, which gives the pattern you can see here, where the bottom left diagonal half of the image is transparent.
So far so good? I hope your algebra and graphing lessons from when you were 12 are coming back.
Draw a circle in FFMPEG
Let's try something more complicated and draw a circle in ffmpeg with a radius of 100 pixels. But before we get into this, let's go back to grade 7 math again, because we need to remind ourselves about the identities related to circles.
A circle of radius \(R\) is described by Pythagoras:

\[X^2 + Y^2 = R^2\]

So if we want to draw a circle we just need to set any pixel opaque where:

\[\sqrt{X^2 + Y^2} \leq R\]

Also, do you remember my table of fancy expressions available in ffmpeg? Specifically, we should use `hypot(A, B)` as part of our command, so it should look a bit like this:
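The first attempt shown in the original is missing from this copy; per the text, it used `hypot` around the untranslated origin, something like:

```shell
# Naive attempt: opaque within radius 100 of the origin (the top left
# corner), which only yields a quarter of the circle on screen.
ffmpeg -y -f lavfi \
  -i "color=color=red:size=640x360:duration=2,format=yuva420p,geq=lum='p(X,Y)':a='if(lte(hypot(X,Y),100),255,0)'" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 quarter_circle.webm
```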
Well, that doesn't look correct. What have we done wrong? We've drawn the circle around the origin of our coordinate space, so we end up missing 75% of our circle! We need to translate our circle over a smidge, so let's figure out the math behind translating a circle.
It turns out that a quick read of my grade 3 math textbook indicates that the origin of a circle can be translated to any point \((A,B)\) with the following identity:

\[(X-A)^2 + (Y-B)^2 = R^2\]

So far so good, and if we want to place our circle in the center of our canvas we need to use \(A=\frac{W}{2}\), \(B=\frac{H}{2}\), which leads to:

\[\left(X-\frac{W}{2}\right)^2 + \left(Y-\frac{H}{2}\right)^2 = R^2\]
Adapting that to our ffmpeg command, we can move the origin to the center of our canvas like this:
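The corrected command is also missing from this copy; applying the translated identity, it presumably became:

```shell
# Translate the circle's origin to the canvas center (W/2, H/2).
ffmpeg -y -f lavfi \
  -i "color=color=red:size=640x360:duration=2,format=yuva420p,geq=lum='p(X,Y)':a='if(lte(hypot(X-W/2,Y-H/2),100),255,0)'" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 circle.webm
```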
Tada!
Alpha Masking with FFMPEG
Ok kids, we're finally getting to the meat and potatoes of the article. We now understand the basics behind drawing shapes, and we want to use those shapes as masks to make part of a video transparent. Do you remember that image of the raw map from my classic computer game? Well, let's take what we have learned to mask out a circle in that map video.
We will need two new ffmpeg filters: `alphaextract` and `alphamerge`. The `alphaextract` filter takes the alpha component and outputs it as grayscale; the `alphamerge` filter takes a grayscale input and a target video source and masks the target.
Usage of these commands looks a little bit like this.
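The actual command has been lost from this copy. Here is a sketch of the idea, assuming the cropped map video is called map.mp4 and measures 640x360 (the name and size are mine; `alphamerge` requires the mask to match the video's dimensions):

```shell
# Build a grayscale circle with geq, then use it as the map's alpha channel.
ffmpeg -y -i map.mp4 \
  -f lavfi -i "color=color=black:size=640x360:duration=10,format=gray,geq=lum='if(lte(hypot(X-W/2,Y-H/2),100),255,0)'" \
  -filter_complex "[0:v][1:v]alphamerge" \
  -shortest -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 map_circle.webm
```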
This is how you add a circular mask over a video with ffmpeg! Cool.
Finishing touches
We still are not quite finished. As you may have noticed, I did not take the time to feather the alpha mask on the map video above, so the edges of the video are quite sharp. I could have written an expression to soften the edge of the mask, and I may show how to do that in a future article, but in this case I want to apply another effect to my map entirely.
I want to take this image that I drew in CorelDRAW, using my vast experience of occasionally making posters once every half decade, and overlay it onto our masked map to create a more polished final result.
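The compositing command is not in this copy either; the overlay step presumably looked something like this (frame.png is a hypothetical name for the CorelDRAW export, and map_circle.webm for the masked map):

```shell
# Composite the decorative frame (with its own alpha) over the masked map.
# Forcing the libvpx-vp9 decoder on the input is needed to actually read
# the webm's alpha channel; the native vp9 decoder discards it.
ffmpeg -y -c:v libvpx-vp9 -i map_circle.webm -i frame.png \
  -filter_complex "[0:v][1:v]overlay=0:0" \
  -c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 map_final.webm
```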
The result looks like this:
Summary
What should you take away from this extensive article? Well, for one thing, I hope that I have demonstrated that you can create decent composite video with ffmpeg. Certainly the user interface is not fluid or easy to understand like the more robust tools (e.g. Davinci Fusion or After Effects), but with patience you can absolutely create decent effects. Moreover, the platform is absolutely stable in comparison to Resolve, which as I mentioned is completely hideous to use on my computer.
I think that scripted systems like the one we created here today truly have their place in handling pre-rendered scenes and animations. I don't have any idea how much value this kind of documentation has to the world at large, since I am fairly certain that people generally just want to use a point and click environment to produce their whirlywoos, but I'm glad that I took the time to write everything down here. I know I will personally be using scripts like the one we created.
Thanks for reading! Also take some time and watch my derpy YouTube stuff to see how it all turns out.