
As you read in the title, I think iOS is the dumbest OS of 2025. Even after the release of Apple Intelligence, it doesn't feel like Apple has nailed this area. Every year, I keep an eye on what's new with iOS, but every time, as a Pixel user, I find it hard to even consider switching. So, let me show you why.
One of the biggest issues I have with iOS is the typing experience, and unfortunately, most of my pain points are still here. For example, we still don't have a clipboard manager that lets me save text and photos for later use. The dedicated number row is still missing, and I don't have quick access to emojis like in Gboard; I have to go to this page every single time. In some third-party apps like WhatsApp and Telegram, I get suggestions based on the last message I received that I can use immediately and save myself some typing. And in voice typing, there is a night and day difference. On Pixel phones, you can use voice commands to edit your text and send it without even touching the keyboard. In contrast, Apple's dictation doesn't support this.
So, it types whatever you say, and at the end, you have to physically interact with the phone to take the action. Plus, Google added one more feature to assistant voice typing that makes it easier to use: it can be converted into a floating bubble that you can pin anywhere on the screen for a cleaner look. On Apple's side, you have to open the keyboard every single time to use dictation. On top of this, Pixel phones support a feature called Live Translate. Whenever you receive a message in a different language, you can simply tap this button, the message is immediately translated inline, and you can reply back in the same language.
On iOS, to translate any text, you have to tap and hold on it, drag your finger over the text, and then choose the translate option before it will translate it for you. Plus, you don't have the ability to reply back in the same language if you want to. Copying and pasting photos and text also plays a very important role in the overall typing experience, and on my Pixel phone, it's much easier. Let's say I come across a photo on the web. I can copy it, immediately tap on the bubble to start adding some edits, and then share it right away from here. On the iPhone, if I do the same, copying the image doesn't give me a quick way to edit the photo. In this case, I have to edit it inside the third-party app I'm using, or I have to save it to my phone's gallery first by tapping and holding, choosing save to photos, opening the Photos app, applying my edits, and then choosing the markup tool.
As you see, it's a very long process to do the exact same thing, so the Pixel wins in this area. So far, nothing has changed since iOS 16, and the same pain points are still here. But with iOS 18, two more features were added. The first one is Writing Tools, which lets you select any text and then tap on Writing Tools. Here you can describe any change you want: you can proofread your text, rewrite it, choose different styles, convert the information into a summary, key points, a list, or a table, and finally use ChatGPT to write the whole thing for you. In this area, I think Apple did better than Google by making Writing Tools accessible throughout the OS.
So, you can use it in native apps and third-party apps. On Pixel phones, you don't get the option to modify your text in third-party apps using AI; it's only available in native apps. For example, in Google Messages you get the Magic Compose feature, while in Gmail you get another feature called Help Me Write, which works differently from the one in Google Messages. And there's another one in Keep Notes called "help me create a list," which creates a list using AI.
As you see, you don't get the same consistent experience across apps, so I wish Google would implement its writing tools the same way Apple does. The win goes to Apple on this one. The second new addition with iOS 18 is the Genmoji feature, which you can access directly from the keyboard. Here you can describe whatever emoji you want to create.
And the best part is the ability to choose certain people from your gallery to create an emoji that looks like them. Here's one of the examples: if I want to create an emoji of myself wearing a hat, this is how it looks. Let me choose the person first and give it some time. And here you go, a funny emoji that you can share with others, and here is how it looks in the chat. Pixel phones do offer similar functionality, but it requires opening a separate app called Pixel Studio, which is only available on the Pixel 9 models.
From here, you can create whatever emoji you want, but unfortunately, it doesn't support the phone's gallery integration like on iOS, so you won't be able to create stickers of people you already know. Overall, in generating emojis, I'll also give the win to iOS for the easier access and the gallery integration. But I'll give the overall win in the typing experience to the Pixel for offering far more useful features, like the dedicated number row, faster access to emojis, better voice typing, the clipboard manager, Live Translate, suggested phrases in third-party apps, and more.
Meanwhile, all the advantages you get with iOS 18 are the better writing tools and Genmoji, which, to be honest, are secondary features that none of us will use daily. So that's it when it comes to the typing experience. My second big issue with iOS devices is the lack of photo editing features, and in 2025, things remain the same. The only new feature we got in this area is called Clean Up, which allows you to remove unwanted people or objects from your photos, something Google released back in 2021 under the name Magic Eraser.
Google has since updated Magic Eraser, and it's now part of the Magic Editor, which uses generative AI and gives much better results than the basic approach Apple uses in Clean Up. So that's all you get with the iPhone. But if you own a Pixel, it feels like having a Swiss Army knife. No matter how bad your photo is, you can simply fix it in post: reposition anything in the frame and let the AI do the work for you, use the Auto Frame feature to properly place your subject in the frame and have the AI generate the missing parts, or use the Reimagine feature to completely change the background and the way your photo looks. There's even more under the Magic Editor, but we're still scratching the surface.
There's the Zoom Enhance feature, which lets you crop in a lot more without losing much quality. There's the Add Me feature, which I find myself using every single time at family gatherings; as you see here, I added myself to this photo and it looks very convincing. And of course Best Take, which helps me fix my group shots and make everyone look their best. There's Photo Unblur, which can fix my old blurry photos and make them look much better. In the video editor, there's a very cool feature called Audio Eraser that can identify different speakers and sounds in a video, and I can precisely choose which sound I want to adjust, mute it completely, or make it much louder. The best part about this feature is that it works with every video in your gallery, no matter how old it is or whether it was recorded with this phone's camera. Similarly, Apple offers a feature called Audio Mix that enhances the speech in your videos by offering four different presets to choose from.
But the problem here is that you don't have any control over individual sounds; it's all or nothing. All you can do is increase or decrease the effect's intensity. The second problem is that the video has to be recorded on the iPhone 16 models for the feature to work. There are even more editing features in Google Photos, but these are the top ones that make a huge difference in my experience, and they show how far Apple lags behind in this area. Even One UI offers most of the photo editing features I showed you in this video, so it feels like Apple is the only one lagging behind. Besides the photo editing, Apple added two new features to Apple Photos with iOS 18.
The first one is the ability to create memories using AI, but we have a similar feature on Pixel phones in Google Photos: you can either type the prompt you want to use or choose from different presets, and both work pretty much the same way. The second improvement is the enhanced search. Apple now uses AI to locate your photos, so you can use phrases like "winter 2025" and it will immediately show you the results. I haven't gotten the equivalent Ask Photos feature on my Pixel phone just yet to be able to compare, but I do really like this addition to Apple Photos.
So, overall in this category, the win goes to the Pixel, hands down. Now, let's talk about the virtual assistants on each device, Siri versus Gemini, to show you why Apple lags behind in this area. The first problem is that Siri doesn't know how to answer complex questions, and when that happens, it shows you a pop-up to choose between web results or asking ChatGPT. The first option is very basic and doesn't add any value; I can simply open Google and do the search myself. The latter takes your question and passes it over to ChatGPT to give you the answer, which takes longer, though Apple does let you say "Ask ChatGPT" before the command to pass it over directly without needing to interact with the device. Either way, I don't think it's intuitive, because you have to say the phrase "Ask ChatGPT" every single time, or interact with the pop-up if you forget. In contrast, when you ask Gemini a question, it gives you the answer right away, which is much better. The second problem is that you get the bare minimum with this integration.
For example, if you want to talk live with ChatGPT, there is no way to do this through Siri; you have to download the ChatGPT app to do so. In contrast, talking live to Gemini is just a tap away. Additionally, Gemini Live gained the ability to use the camera, so you can ask it questions about whatever you see, from products to landscapes and more, or use the screen sharing feature to talk live about whatever is on your screen. You can ask questions about the videos you watch, summarize the information you see on the screen, ask questions about your photos, and much more.
Those two features are dynamic, so you can jump from one thing to the other without needing to stop; just show it whatever you want to talk about. Plus, they are available completely free of charge on the Pixel 9 and the S25 models, no subscription required. In contrast, you don't get those two features on iOS 18, and if you want them, you have to download the ChatGPT app and subscribe to the ChatGPT Plus plan to get video and screen sharing. The last problem I have in this area is the broken promises. Apple talked about three new features coming to Siri that are yet to be released more than six months after the official release of iOS 18: on-screen awareness, personal context, and taking action in and across apps. They look great on paper, but after the delayed release and what I've seen so far, I'm not very optimistic that these features will work as advertised. Now, let's talk about Apple's Visual Intelligence. This feature uses Google Lens to find visual matches online, or ChatGPT to ask questions about the image, which is the same concept used in Siri.
So it suffers from the same limited integration. For example, when I use the search option, all I get is some web results, while Google Lens in the Google app gives you far more options, like the ability to add extra words to my search to look for a different variant of the same product, or to use it to solve math problems. None of these features are available in Apple's Visual Intelligence. Similarly, when I use ChatGPT, it offers a text-based interaction with no voice input, which is not as good as using the live camera in Gemini, which is much more advanced and flexible.
Another thing I don't like about Apple's Visual Intelligence is that I can't give it a photo from my gallery. Sometimes I don't have the physical product with me, only a screenshot or an old photo. In contrast, with Google Lens, you can simply open your gallery, choose whatever photo you have, and it will try to give you results. Plus, on Android, I have Circle to Search, which makes it even easier for me to identify products.
So, any time I see something interesting, I can trigger Circle to Search, precisely highlight the part I'm asking about, and immediately get the results, or even use it to identify the background music playing in videos, which is very cool. Last but not least, Apple made the Camera Control button a requirement for Visual Intelligence, which doesn't make any sense, because I can simply download the Google and ChatGPT apps and get the same functionality completely free of charge without needing to buy the new iPhone 16. So, that's it when it comes to Apple's Visual Intelligence. Now, let's talk about multitasking. Since my last comparison between iOS and the Pixel's UI two years ago, nothing has changed in this area, which is very disappointing. For example, on my Pixel, I can simply start a split screen.
So, if I have a photo that includes some numbers, I can open the photo and the calculator side by side and start adding the numbers, which makes my life much easier. But to do the same thing on iOS, you have to open the photo, go to the calculator app, and keep switching between them to copy the numbers, which takes much longer. Considering that a phone like the 16 Pro Max has a 6.9-inch display, it's big enough to support the feature, but to this day, split screen is not available on iOS. Plus, Google added one more feature to split screen that makes it even more convenient. Let's say you want to check all your email accounts at once in a split screen view like this.
Instead of going through the process every single time, you can save it as an app pair, and the next time you want to do this, you'll have a shortcut on your home screen that you can tap and you're good to go. Another issue with multitasking on iOS is the lack of overlay controls on top of other apps, a capability supported on Android called display over other apps. For example, in a situation like this, when you start screen sharing inside Zoom, you won't get any controls on the screen like you do on Android. Let me show you the difference: when I start sharing the screen, you will see that I have some controls I can use to explain to the other person in the meeting what's going on by using annotations.
I can draw things on the screen and so on. But this feature is not available on iOS, and that's why features like the chat heads in Telegram and Facebook Messenger aren't available on iOS either. So overall, there is almost no multitasking on iOS other than the picture-in-picture window, and that's pretty much it.
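For anyone curious what powers this on the Android side, here's a rough sketch of the "display over other apps" capability that Zoom-style floating controls and chat heads rely on. The function name, intent, and the single button are placeholders of my own; a real app would host the overlay view inside a foreground service, but the permission check and WindowManager calls are the core of it.

```kotlin
import android.content.Context
import android.content.Intent
import android.graphics.PixelFormat
import android.net.Uri
import android.provider.Settings
import android.view.Gravity
import android.view.WindowManager
import android.widget.Button

// Minimal sketch, not a full implementation: shows the overlay permission check
// and how a view gets attached on top of other apps.
fun showFloatingControl(context: Context) {
    // The user grants the "display over other apps" permission once in system settings.
    if (!Settings.canDrawOverlays(context)) {
        context.startActivity(
            Intent(
                Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                Uri.parse("package:${context.packageName}")
            ).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        )
        return
    }

    val windowManager = context.getSystemService(WindowManager::class.java)
    val params = WindowManager.LayoutParams(
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.WRAP_CONTENT,
        WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY, // drawn above other apps
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,
        PixelFormat.TRANSLUCENT
    ).apply { gravity = Gravity.TOP or Gravity.END }

    // Any view works here; a single button stands in for annotation controls or a chat head.
    val bubble = Button(context).apply { text = "Annotate" }
    windowManager.addView(bubble, params)
}
```

iOS has no equivalent of this API for third-party apps, which is why those floating controls simply can't exist there.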
Now, let's talk about notifications, which in my opinion are still far behind Android in a lot of areas. For example, on Android, I can immediately tell what type of notifications are waiting for me just by looking at the status bar icons, while on iOS, I don't have any indication; to know whether I have notifications, I have to pull down the notification shade, which is a bit annoying. I do have to give Apple credit for adding the new priority notifications feature in iOS 18, which puts important notifications on top and also gives you a summary of multiple notifications at once so you can see what's going on at a glance, which is nice to have. But look at how many features iOS 18 is still missing compared to Android. In this example, I sent myself the exact same message on both phones, and look at how many things I can do on Android that I cannot do on iOS. Number one, I can reply to the message using suggested phrases that match the message I received really well.
Plus, I can take actions like marking it as read or replying directly from the notification shade, while on iOS, all you can do is reply to the message and that's pretty much it. And it doesn't stop there: sometimes you might receive a link from someone that you're curious to check, but you don't want to open the conversation, and with the help of smart replies, you can simply do this. It can also help you copy a number from a message with a tap of a button.
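To give a sense of how these notification actions work on the Android side, here's a minimal sketch of a message notification with a direct reply field and a mark-as-read button. The channel ID, intent actions, and icons are made-up placeholders; the RemoteInput and action-builder calls are the standard pieces, and setAllowGeneratedReplies is what opts an action into the system's suggested replies.

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.RemoteInput

// Placeholder identifiers for illustration; channel creation is omitted.
const val CHANNEL_ID = "messages"
const val KEY_TEXT_REPLY = "key_text_reply"

fun buildMessageNotification(context: Context): Notification {
    // RemoteInput lets the user type the reply straight into the notification shade.
    val remoteInput = RemoteInput.Builder(KEY_TEXT_REPLY)
        .setLabel("Reply")
        .build()

    // These PendingIntents would point at the app's own receivers (hypothetical actions here).
    val replyIntent = PendingIntent.getBroadcast(
        context, 0, Intent("com.example.ACTION_REPLY"),
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )
    val markReadIntent = PendingIntent.getBroadcast(
        context, 1, Intent("com.example.ACTION_MARK_READ"),
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )

    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyIntent
    )
        .addRemoteInput(remoteInput)
        .setAllowGeneratedReplies(true) // allows system-generated smart reply chips
        .build()

    return NotificationCompat.Builder(context, CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("Hey, are you free tomorrow?")
        .addAction(replyAction)
        .addAction(android.R.drawable.ic_menu_view, "Mark as read", markReadIntent)
        .build()
}
```

iOS notifications support a reply action too, but there's no equivalent of the extra custom actions and generated reply suggestions shown above, which is the gap I'm pointing at here.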
And my favorite feature on Android is the notification history, which shows you every notification you dismissed in the past 24 hours, while on iOS, once a notification is gone, you won't be able to find it. With that, I've covered all the points I think Apple needs to focus on to make iOS more functional, not just a good-looking operating system. It's also worth noting that there are a lot of other things I didn't talk about in this video that I already covered in my previous comparison; you can find the link to it in the description below. But for now, thanks so much for watching, and see you in the next one.