Wednesday 18 May 2011

Understanding the image reflection

There is a sample provided by Apple that demonstrates the UIImageView reflection effect as implemented in the iTunes and iPod player applications. Although it is a small sample, it covers a number of concepts which, if understood properly, can be very helpful to developers and can be used in other scenarios. In the next couple of posts, I would like to cover the following concepts used in this example:
  1. Image inversion.
  2. Gradient
  3. Masking
Today I will discuss image inversion; Gradient and Masking are covered in my next posts. If you are looking for a complete example which includes image inversion, gradient, and masking, you can refer to my post about image masking. For this post, I have provided sample code which you can download and test on your machine. The output of our application will look like the picture below. Note that we not only want to vertically invert our image, but we also want to control the height of the reflection being displayed.



Before we delve into the code, we will discuss some concepts involved in image inversion.

Translation
Translation refers to moving the origin of the coordinate system by an offset (dx, dy). For example, if the current origin of a graphics context's coordinate system is (0,0) and we want to move it to (10,10), we can do so by changing the values in the current transformation matrix using the function CGContextTranslateCTM(contextRef, 10, 10).
(Figure: Translation)
Scaling
Scaling refers to increasing or decreasing the size of an object. Scaling is done by multiplying each point in the diagram by the scaling factors. In a two-dimensional coordinate system we have two scaling factors, one for each dimension, i.e. Sx and Sy. If Sx and Sy are equal (Sx = Sy), the scaling is called uniform scaling; in this case the size of the object changes but the shape remains the same. If Sx is not equal to Sy, the scaling is called differential scaling; in this case both size and shape change. For example, if we have a square with sides of length 2 and we increase the side length to 4 using scaling factors Sx = Sy = 2, it will still remain a square, but with larger sides, as shown in the figure. We can scale using the function CGContextScaleCTM(context, 2, 2).
(Figure: Scaling)


Context Coordinate Systems
In iOS, the coordinate system of views (UIView) and layers (CALayer) starts from the upper-left corner, whereas the coordinate system of graphics contexts such as bitmap, PDF, and other custom contexts starts from the lower-left corner, as shown below. So if we draw the same diagram both in a UIView and in a bitmap context, the two drawings will have different orientations with respect to each other.

Putting It All Together
Now it is time to put all the ideas together, and the best way to do that is to demonstrate them through code.

First we will define two UIImageView properties in a ViewController class.
@property (retain) UIImageView* imageView;
@property (retain) UIImageView* reflectedImageView;
Next we will load the original image into the imageView property.
//initialize main image view.
-(void)initImageView{
 //load image
 UIImage* image = [UIImage imageNamed:@"bus"];
 CGRect imageViewFrame;
 imageViewFrame.size = image.size;
 imageViewFrame.origin = CGPointMake((self.view.frame.size.width - image.size.width) / 2 , (self.view.frame.size.height - image.size.height) / 2);
 imageView = [[UIImageView alloc] initWithFrame:imageViewFrame];
 imageView.image = image;
 [self.view addSubview:imageView];
}
In this function we adjust the size of the image view according to our image and then add the image view to our controller's view as a subview.

Next we initialize reflectedImageView property.
//initialize reflected image view.
-(void)initReflectedImageView{
 // create the reflection view
 CGRect reflectionRect=self.imageView.frame;
 
 // determine the size of the reflection to create. The reflection is a fraction of the size of the view being reflected
 NSUInteger reflectionHeight=self.imageView.bounds.size.height*reflectionFraction;
 
 // the reflection is a fraction of the size of the view being reflected
 reflectionRect.size.height=reflectionHeight;
 
 // and is offset to be at the bottom of the view being reflected
 reflectionRect=CGRectOffset(reflectionRect,0,self.imageView.frame.size.height);
 
 reflectedImageView = [[UIImageView alloc] initWithFrame:reflectionRect];
 
 // create the reflection image, assign it to the UIImageView and add the image view to the controller's view
 reflectedImageView.image=[self reflectedImage:imageView withHeight:reflectionHeight];
  
 [self.view addSubview:reflectedImageView];
}
In this function we first get the frame of the imageView property which we set in the earlier function. As discussed earlier, in some cases we might only want to display a partial reflection of the original image. To do this we define a constant, reflectionFraction, whose value can be set between 0 and 1; it determines the height of the reflection we want to display, where 0 means no reflection and 1 means a full reflection. After determining the height of the reflection, we calculate the position of reflectedImageView, which sits just underneath imageView. We then call [self reflectedImage:imageView withHeight:reflectionHeight] to get the reflected image, and finally add reflectedImageView to the controller's view.

Our main function is reflectedImage:withHeight:. This is where we combine all the concepts we have discussed today, so let's have a close look at it.
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (height == 0)
        return nil;
    
 //create RGB color space.
 CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
 // create a bitmap graphics context the size of the image with height given in the function parameter.
 CGContextRef bitmapContext = CGBitmapContextCreate(NULL, fromImage.bounds.size.width, height, 8, 0, colorSpace, (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
 
 //Bitmap context coordinate start from bottom left. Set the coordinate to top left just like UIView and CALayer.
 CGContextTranslateCTM(bitmapContext, 0.0, height);
 //flip the coordinate space so that when is image is drawn it is drawn upside down.
 CGContextScaleCTM(bitmapContext, 1.0, -1.0);
 
 //draw image in the context.
 CGContextDrawImage(bitmapContext,fromImage.bounds, fromImage.image.CGImage);
 
 //Get the image from the context.
 CGImageRef reflectionImage = CGBitmapContextCreateImage(bitmapContext);
 CGContextRelease(bitmapContext);
 CGColorSpaceRelease(colorSpace);
 
 UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];
 CGImageRelease(reflectionImage);
 
 return theImage;
}

In this function we create the reflected image using a bitmap graphics context. First we create an RGB color space, and then we create a bitmap graphics context with 32 bits per pixel (8 bits per component). We set the height of the bitmap context to the reflection height which we calculated in the initReflectedImageView function. After this we translate the current transformation matrix with CGContextTranslateCTM(bitmapContext, 0.0, height). Recall from our earlier discussion that a bitmap context's origin is at the lower-left corner; by translating to (0.0, height) we are moving the origin of the bitmap context to the top-left corner, the same as a UIView. However, recall that in a bitmap graphics context drawing starts from the lower-left corner and moves upward, so if we drew any image in this context now, it would fall outside the bitmap graphics context, as shown in the figure.

In order to draw within the bitmap graphics context we need to flip the y-axis. We can do this by scaling the y-axis by -1, calling CGContextScaleCTM(bitmapContext, 1.0, -1.0); note the -1 in the last argument. So a pixel that was drawn at (85, 92) in the original image would now be placed at (85, -92) due to the negative y-scaling. After the translation and scaling, our result looks as shown in the figure.
We use CGContextDrawImage to draw the image into the bitmap context. It receives the bounds of the image and a reference to the image to draw into the graphics context. You might have already noticed that our image is not inverted after calling CGContextDrawImage, so how is it inverted? After we obtain a CGImageRef from the CGBitmapContextCreateImage function, we convert it into a UIImage by calling [UIImage imageWithCGImage:reflectionImage]. According to Apple's documentation, when a UIImage is initialized from a CGImageRef, it adjusts the orientation of the CGImageRef in a similar fashion to what we have done through translation and scaling. That is how we get the inverted image from our function.

You might wonder what would happen if we had not applied the translation and scaling in the first place. After all, as we have seen, a bitmap context draws images in an inverted orientation, and that is exactly our goal. I would encourage you to comment out the translation and scaling lines in the source code and observe what happens: the reflected image is drawn in the same orientation as the main image. This is because we call [UIImage imageWithCGImage:reflectionImage], which compensates for the bitmap context's orientation, and it is the reason we need the translation and scaling in this case. If we wanted to avoid the scaling and translation, one solution would be to draw the output of the bitmap graphics context in a drawRect: method rather than converting it into a UIImage object.

In my next post I will discuss the Gradient, which is used in many iOS controls and is therefore an important concept to understand. Please feel free to post your comments or feedback.
