How to resize image in Swift
While keeping the aspect ratio, meaning the resized image won't be stretched in any way.
Published: May 11, 2022
Resizing images in Swift is nowadays pretty straightforward, primarily thanks to UIGraphicsImageRenderer, a high-level yet very performant API we will use together with UIImage.
The main challenge when resizing is to determine the correct size. We want to preserve the aspect ratio so the resized image is not stretched in any way. Resizing images this way is often done as a preparation for upload.
Keeping the aspect ratio
While this can be calculated by hand easily enough, there is no need, since AVFoundation can help us with the AVMakeRect function. It returns the largest rectangle that fits inside the size we specify while keeping the aspect ratio.
We can start by defining the maximum size we want the resized image to be.
let maxSize = CGSize(width: 1000, height: 1000)
Next, we can use AVMakeRect to get the resulting size of the image:
let availableRect = AVFoundation.AVMakeRect(aspectRatio: image.size, insideRect: .init(origin: .zero, size: maxSize))
let targetSize = availableRect.size
Suppose we have a 2000×1000 image (for easy maths) that we want to resize. Since our maximum dimensions are 1000×1000, the resized size in this case will be 1000×500 pixels.
This is what AVMakeRect will return to us.
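To make the maths concrete, here is a small standalone sketch; the hard-coded 2000×1000 value simply stands in for image.size:
import AVFoundation

// A 2000×1000 aspect ratio constrained to a 1000×1000 bounding rect
let exampleRect = AVMakeRect(
    aspectRatio: CGSize(width: 2000, height: 1000),
    insideRect: CGRect(origin: .zero, size: CGSize(width: 1000, height: 1000))
)
// exampleRect.size is now 1000×500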
Resizing the image
Once we have the size, we can create an instance of UIGraphicsImageRenderer and render the existing image into a smaller size.
There is just one last thing to take care of. When resizing images, we usually think in pixels, but iOS works in points. On modern screens, 1 point corresponds to 2 or 3 pixels depending on the device's scale, and the image renderer uses the system scale when drawing.
So if we were to pass it the targetSize, it would render an image that is 3000×1500 pixels on a 3x device, which is not what we want.
Fortunately, there is an easy solution. We can change the scale of the renderer by creating a UIGraphicsImageRendererFormat like this:
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
And finally, we can resize the image:
let resized = renderer.image { (context) in
image.draw(in: CGRect(origin: .zero, size: targetSize))
}
So the entire code looks like this:
import AVFoundation
import UIKit

let maxSize = CGSize(width: 1000, height: 1000)

// Largest rect that fits inside maxSize while keeping the image's aspect ratio
let availableRect = AVFoundation.AVMakeRect(aspectRatio: image.size, insideRect: .init(origin: .zero, size: maxSize))
let targetSize = availableRect.size

// Scale 1 so targetSize is treated as pixels rather than points
let format = UIGraphicsImageRendererFormat()
format.scale = 1

let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
let resized = renderer.image { (context) in
    image.draw(in: CGRect(origin: .zero, size: targetSize))
}
You can easily create an extension from the code above and perhaps pass the target size as a parameter.
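As a sketch of what such an extension could look like (the name resized(maxSize:) is my own example, not something defined above):
import AVFoundation
import UIKit

extension UIImage {
    // Returns a copy rendered so it fits within maxSize, keeping the aspect ratio
    func resized(maxSize: CGSize) -> UIImage {
        let availableRect = AVMakeRect(aspectRatio: size, insideRect: CGRect(origin: .zero, size: maxSize))
        let targetSize = availableRect.size

        let format = UIGraphicsImageRendererFormat()
        format.scale = 1

        let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}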
What about SwiftUI?
The above uses UIKit APIs; however, you can easily adapt the code for SwiftUI since you can create a SwiftUI Image from a UIImage.
So once you have the resizing done, you can do something like this:
let swiftUIImage = Image(uiImage: image)
Currently, there is no way to get a UIImage from an Image. So you need to instantiate it from some other source (like a URL, an asset catalog name, and similar), resize it, and then create the Image.
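For example, here is a hypothetical flow, assuming an asset named "photo" and the resized(maxSize:) extension sketched earlier:
import SwiftUI
import UIKit

// Load a UIImage from the asset catalog, resize it, then wrap it in a SwiftUI Image
if let original = UIImage(named: "photo") {
    let smaller = original.resized(maxSize: CGSize(width: 1000, height: 1000))
    let swiftUIImage = Image(uiImage: smaller)
}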
Bonus: Getting image pixel size
The above resizing works great, but it is pointless for images that are already smaller than the target size.
To determine if we need to resize the image, we can first check the size of the original image. This is not as trivial as checking the size property, because of the point != pixel reality.
However, since UIImage has a scale property, we can calculate the pixel dimensions by multiplying the point size by the scale, perhaps with an extension like this:
import UIKit

extension UIImage {
    var pixelSize: CGSize {
        let heightInPoints = size.height
        let heightInPixels = heightInPoints * scale
        let widthInPoints = size.width
        let widthInPixels = widthInPoints * scale
        return CGSize(width: widthInPixels, height: heightInPixels)
    }
}
The code is intentionally a bit verbose to make it clear what we are doing.
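A hypothetical usage of the extension could then guard the resize, skipping images that already fit within the 1000×1000 target:
// Assuming image is a UIImage and pixelSize is the extension above
let maxSize = CGSize(width: 1000, height: 1000)

if image.pixelSize.width > maxSize.width || image.pixelSize.height > maxSize.height {
    // The image is larger than the target, so run the resizing code from above
}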