For developers, the fixed resolution made it very easy to design for the iPhone in those early years, and everyone was happy. But enough is never enough, so circa 2010 we got the iPhone 4 and its retina display, which doubled the resolution while keeping the same screen size. The idea was that with such a high resolution you couldn't notice the spacing between pixels, and rounded edges would look naturally smooth.
That bump in screen quality was very impressive and had a very good customer reception. But it came with its own issues for developers. To take advantage of such a high resolution, our apps needed to adapt their layouts and provide images with double the density, or those images would look so small on the screen that they would be impossible to interact with. Software upscaling was possible, but it wouldn't produce the sharp edges that users were looking for. And Apple, being Apple, eventually started to require apps to be compatible with the retina display, otherwise they would be rejected during review.
There was no escape for developers! This is when image asset management hell was born on iOS projects! From then on we needed to provide variations of the same image to match the screen resolutions. Of course, tools were created to help with this new problem; they're what this post is about! Given our problem space, let's learn about screen densities! But first there are a few concepts to recall. So, in the example above, both the original iPhone and the iPhone 4 had a 3.5-inch display.
While the original had 163 ppi, the iPhone 4 had a stunning 326 ppi! But what does this mean in practice? A picture is still worth a thousand words (no pun intended). The image above was taken from the article "Real retina vs. …". On the left we have the image presented on a non-retina display. We can see that there's a lot of spacing around each of the dots, which compromises the image quality. On the right we have the same picture presented on a retina display. This is a screen with the same physical size as the left one, but with double the resolution.
We can see that the spacing around each pixel is much smaller, producing a higher-quality image with much more detail. The rounded edges are a lot smoother, for example. Keep in mind that the picture above is a close-up; we normally don't hold a screen close enough to our eyes to see it in that detail.
So, in effect, when using a retina display you shouldn't be able to notice the spacing around pixels. The trick is revealed! If you're curious about the screen size, resolution and PPI of every iPhone, there's an excellent reference on this site.
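The PPI figures follow directly from the pixel resolution and the physical diagonal: divide the diagonal length in pixels by the diagonal length in inches. Here's a sketch of that calculation (the small gap between the computed values and Apple's advertised 163 and 326 ppi comes down to rounding of the physical dimensions):

```swift
// PPI = diagonal length in pixels / diagonal length in inches
func pixelsPerInch(width: Double, height: Double, diagonalInches: Double) -> Double {
    (width * width + height * height).squareRoot() / diagonalInches
}

// Both devices have the same 3.5-inch diagonal; only the resolution differs.
let originalPPI = pixelsPerInch(width: 320, height: 480, diagonalInches: 3.5)  // ≈ 164.8
let retinaPPI   = pixelsPerInch(width: 640, height: 960, diagonalInches: 3.5)  // ≈ 329.7
```

Doubling the resolution on the same physical diagonal exactly doubles the density, which is why the iPhone 4's ppi is twice the original's.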
Historically, developers of graphical interfaces always thought of UIs in terms of the screen resolution alone, and this was also true for the first generation of iPhones. We had a single device with a single display size and resolution, and we drew our elements based on that available space by doing simple calculations and rendering at absolute coordinates. With the introduction of retina displays we had a new, higher resolution to cover, which would immediately break our app's compatibility with the new device.
The important thing to note is that increasing the screen resolution is all about image quality, as we showed above, and not about giving more drawing space. The iPhone 4 display had the same physical size but double the resolution, so where we had 1 pixel we now had 4 pixels in the same space, allowing us to represent much more detail.
But wait, what?! You told me it's double the resolution, so how do we have a 4-to-1 pixel ratio? I'll try to picture that for you. Let's pretend the image above represents two slices of the same physical size, one cut from an original iPhone and the other from an iPhone 4. Each square represents a pixel, and it's easy to see that where there was a single pixel on the first, four of them now fit. Since we doubled the resolution on each axis, we actually have 4 times as many pixels as the previous generation.
Another way of seeing it is that we can fit 4 screens of the original iPhone into just one screen of the retina display. When running our app on a retina display we want to keep the UI elements at the same physical size, so it's just a matter of doubling the size of the elements themselves.
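The 4-to-1 ratio is simple arithmetic: doubling each axis quadruples the pixel count. A minimal sketch:

```swift
// Original iPhone vs. iPhone 4: double the resolution on each axis.
let nonRetina = (width: 320, height: 480)
let retina    = (width: nonRetina.width * 2, height: nonRetina.height * 2)

let nonRetinaPixels = nonRetina.width * nonRetina.height  // 153_600
let retinaPixels    = retina.width * retina.height        // 614_400
let pixelRatio      = retinaPixels / nonRetinaPixels      // 4
```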
But we don't want to manage different sizes for different devices. This seems like the kind of problem that can be solved by software, and this is exactly what the point measure does: it abstracts away our concerns about the density of the screen. From the developer's point of view, we are still working with a canvas of 320 x 480 points. It is now up to UIKit to decide the actual size of an element depending on the device the rendering is done on. Ok, so this is quite a lot to grasp!
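On a real device this mapping is driven by the screen's scale factor (exposed by UIKit as `UIScreen.main.scale`). A hedged, UIKit-free sketch of the idea — the `Screen` type here is illustrative, not a real API:

```swift
// A point is a device-independent unit; physical pixels = points × scale.
struct Screen {
    let scale: Double  // 1.0 on the original iPhone, 2.0 on the iPhone 4
}

func pixels(forPoints points: Double, on screen: Screen) -> Double {
    points * screen.scale
}

// The same 320-point-wide canvas on both devices:
let onOriginal = pixels(forPoints: 320, on: Screen(scale: 1.0))  // 320 physical pixels
let onRetina   = pixels(forPoints: 320, on: Screen(scale: 2.0))  // 640 physical pixels
```

The layout code only ever speaks in points; the framework applies the scale at render time, which is why the same code draws correctly on both densities.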
But now we can at least advance to talking about images. Have you ever asked yourself how computers represent images? Although this may be basic knowledge to most developers, it's a good theme to revisit. When it comes to modern computer graphics, there are two main ways of representing images: bitmaps and vectors.
Bitmaps are just a matrix of rows and columns where each intersection is painted with a color. Bitmaps are best for representing pictures and highly detailed images, just like this one. Well, it's not a masterpiece, but I hope it made you smile! Of course, it's just a black and white image. But turning the bits into bytes lets us represent a wide palette of colors for each pixel. Here's an example (the image is a reproduction from Adafruit, authored by Carter Nelson):
Here each pixel represents a number, and by applying a palette, which maps each number to a specific color, we produce a colored bitmap. The higher the number of bits per pixel, the more colorful the image can be. Here we can also see that bitmaps have a fixed width and height, which is their big downside.
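The palette idea can be sketched in a few lines: the bitmap stores small indices, and a lookup table turns each index into a color. The names and colors below are illustrative, not from any real image format:

```swift
// A palette maps each index to a color (here, just color names for clarity).
let palette: [UInt8: String] = [0: "white", 1: "black", 2: "red"]

// A tiny 3x3 indexed bitmap: rows and columns of palette indices.
let bitmap: [[UInt8]] = [
    [0, 1, 0],
    [1, 2, 1],
    [0, 1, 0],
]

// Resolving the indices row by row yields the colored image.
let colored = bitmap.map { row in row.map { palette[$0]! } }
// colored[1] == ["black", "red", "black"]
```

Storing indices instead of full colors is also why palettes were popular when memory was scarce: one byte per pixel instead of three or four.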
Nowadays we have very good algorithms for shrinking or enlarging bitmaps, but there's always some loss. Now we come to vectors, which are cooler to programmers! They are a mathematical representation of the shapes and glyphs that form an image. This means they can be easily scaled, which makes them very desirable for the responsive designs we strive to produce in web and mobile apps. To better understand vector images, let's get our hands dirty with a little code.
PaintCode is a drawing app like Sketch or Illustrator where you can make your designs. What makes it different is that instead of saving your drawing to an image file, PaintCode generates the code necessary to draw it on screen with UIKit.
The code itself is not important, but I want you to get a sense that it represents a series of instructions on how the image should be drawn. In this example the image has a fixed size, but notice how we could easily introduce variables to resize it as needed, which would preserve its sharp quality.
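The same idea can be sketched without PaintCode: a vector image is just a list of drawing instructions, and scaling it means transforming coordinates, never resampling pixels. The `Instruction` type below is illustrative, loosely mirroring the move/line commands that `UIBezierPath` uses:

```swift
// A vector image as a list of drawing instructions.
enum Instruction: Equatable {
    case moveTo(x: Double, y: Double)
    case lineTo(x: Double, y: Double)
}

// Scaling transforms the coordinates — no pixels involved, so no quality loss.
func scaled(_ drawing: [Instruction], by factor: Double) -> [Instruction] {
    drawing.map { instruction in
        switch instruction {
        case .moveTo(let x, let y): return .moveTo(x: x * factor, y: y * factor)
        case .lineTo(let x, let y): return .lineTo(x: x * factor, y: y * factor)
        }
    }
}

let triangle: [Instruction] = [.moveTo(x: 0, y: 0), .lineTo(x: 10, y: 0), .lineTo(x: 5, y: 8)]
let doubled = scaled(triangle, by: 2)  // same shape, twice the size
```

This is why a vector asset can serve every screen density: the renderer replays the instructions at whatever scale the device needs.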
There's a lot of debate around which type is better, and, as always when it comes to software engineering, the answer is: it depends!
Each representation has its pros and cons, and you should evaluate your usage scenario and requirements to make the better pick. Generally speaking, bitmaps are usually better for photography or highly detailed images, while vectors deal better with shapes and abstract drawings.
Always check with your designer; they usually know what's best! The discussion about bitmaps vs. vectors doesn't end with their in-memory representation: both can be stored in several different file formats. This is because the way we represent an image as a data structure is not necessarily the best way to store it.
For instance, to save a plain bitmap to a file we would need to store the color of each pixel in a matrix, which is very wasteful.

The launch screen is displayed as soon as the user taps your app icon, and it stays on the screen until your main interface is displayed.
If your app is running on iOS 8 or later, the system uses a launch screen from a storyboard file and sizes it appropriately for the screen. For deployment targets prior to iOS 8, you add a set of launch images to an asset catalog for each of the possible screen sizes. New projects are created with a launch screen storyboard file called LaunchScreen.storyboard.
The launch screen uses size classes to adapt to different screen sizes and orientations; see Building for Multiple Screen Sizes for more information. You are also limited to UIKit classes that do not require updating. For more information, see Creating a Launch Screen File. To set the launch screen, open the General information tab for your target, and select the launch screen file from the pop-up menu.
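Under the hood, selecting the launch screen file writes the `UILaunchStoryboardName` key into your target's Info.plist; the value below assumes the default file name that Xcode generates:

```xml
<key>UILaunchStoryboardName</key>
<string>LaunchScreen</string>
```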
For more help with icons, launch images, and the asset catalog, see Xcode Help. You can easily capture screenshots for launch images on a device. On the device, configure the screen the way you want it to appear. Then press the device's Lock and Home buttons simultaneously. Your screenshot is saved in the Saved Photos album in the Photos app.
Copy the screenshot from the device to your Mac.