Hello Readers, CoolMonkTechie heartily welcomes you to this article (How To Manage App’s Life Cycle In iOS?).
In this article, we will understand how to manage an app’s life cycle in iOS. We will discuss how to respond to system notifications when our app is in the foreground or background, and how to handle other significant system-related events.
To understand the app life-cycle concepts, we will discuss the topics below:
Overview
Respond to Scene-Based Life-Cycle Events
Respond to App-Based Life-Cycle Events
Respond to Other Significant Events
A famous quote about learning is:
“One learns from books and example only that certain things can be done. Actual learning requires that you do those things.”
So Let’s begin.
Overview
The current state of our app determines what it can and cannot do at any time. For example, a foreground app has the user’s attention, so it has priority over system resources, including the CPU. By contrast, a background app must do as little work as possible, and preferably nothing, because it is offscreen. As our app changes from state to state, we must adjust its behavior accordingly.
When our app’s state changes, UIKit notifies us by calling methods of the appropriate delegate object:
In iOS 13 and later, use UISceneDelegate objects to respond to life-cycle events in a scene-based app.
In iOS 12 and earlier, use the UIApplicationDelegate object to respond to life-cycle events.
If we enable scene support in our app, iOS always uses our scene delegates in iOS 13 and later. In iOS 12 and earlier, the system uses our app delegate.
Respond to Scene-Based Life-Cycle Events
If our app supports scenes, UIKit delivers separate life-cycle events for each. A scene represents one instance of our app’s UI running on a device. The user can create multiple scenes for each app, and show and hide them separately. Because each scene has its own life cycle, each can be in a different state of execution. For example, one scene might be in the foreground while others are in the background or are suspended.
Scene support is an opt-in feature. To enable basic support, add the UIApplicationSceneManifest key to our app’s Info.plist file.
The following figure shows the state transitions for scenes. When the user or system requests a new scene for our app, UIKit creates it and puts it in the unattached state. User-requested scenes move quickly to the foreground, where they appear onscreen. A system-requested scene typically moves to the background so that it can process an event. For example, the system might launch the scene in the background to process a location event. When the user dismisses our app’s UI, UIKit moves the associated scene to the background state and eventually to the suspended state. UIKit can disconnect a background or suspended scene at any time to reclaim its resources, returning that scene to the unattached state.
We use scene transitions to perform the following tasks (a delegate sketch follows the list):
When UIKit connects a scene to our app, configure our scene’s initial UI and load the data our scene needs.
When transitioning to the foreground-active state, configure our UI and prepare to interact with the user.
Upon leaving the foreground-active state, save data and quiet our app’s behavior.
Upon entering the background state, finish crucial tasks, free up as much memory as possible, and prepare for our app snapshot.
At scene disconnection, clean up any shared resources associated with the scene.
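As a rough sketch, here is how those tasks map onto the corresponding UISceneDelegate callbacks; the method bodies are placeholders, not a full implementation:

import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        // Configure the scene's initial UI and load the data the scene needs.
    }

    func sceneDidBecomeActive(_ scene: UIScene) {
        // Configure the UI and prepare to interact with the user.
    }

    func sceneWillResignActive(_ scene: UIScene) {
        // Save data and quiet the app's behavior.
    }

    func sceneDidEnterBackground(_ scene: UIScene) {
        // Finish crucial tasks, free up memory, and prepare for the snapshot.
    }

    func sceneDidDisconnect(_ scene: UIScene) {
        // Clean up any shared resources associated with the scene.
    }
}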
In addition to scene-related events, we must also respond to the launch of our app using our UIApplicationDelegate object.
Respond to App-Based Life-Cycle Events
In iOS 12 and earlier, and in apps that don’t support scenes, UIKit delivers all life-cycle events to the UIApplicationDelegate object. The app delegate manages all of our app’s windows, including those displayed on separate screens. As a result, app state transitions affect our app’s entire UI, including content on external displays.
The following figure shows the state transitions involving the app delegate object. After launch, the system puts the app in the inactive or background state, depending on whether the UI is about to appear onscreen. When launching to the foreground, the system transitions the app to the active state automatically. After that, the state fluctuates between active and background until the app terminates.
We use app transitions to perform the following tasks (see the delegate sketch after this list):
At launch, initialize our app’s data structures and UI.
At activation, finish configuring our UI and prepare to interact with the user.
Upon deactivation, save data and quiet our app’s behavior.
Upon entering the background state, finish crucial tasks, free up as much memory as possible, and prepare for our app snapshot.
At termination, stop all work immediately and release any shared resources.
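As a rough sketch, the corresponding UIApplicationDelegate callbacks look like this; the bodies are placeholders:

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions:
                         [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Initialize the app's data structures and UI.
        return true
    }

    func applicationDidBecomeActive(_ application: UIApplication) {
        // Finish configuring the UI and prepare to interact with the user.
    }

    func applicationWillResignActive(_ application: UIApplication) {
        // Save data and quiet the app's behavior.
    }

    func applicationDidEnterBackground(_ application: UIApplication) {
        // Finish crucial tasks, free up memory, and prepare for the snapshot.
    }

    func applicationWillTerminate(_ application: UIApplication) {
        // Stop all work immediately and release any shared resources.
    }
}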
Respond to Other Significant Events
In addition to handling life-cycle events, apps must also be prepared to handle the events listed below. We can use our UIApplicationDelegate object to handle most of these events. In some cases, we may also be able to handle them using notifications, allowing us to respond from other parts of our app.
Memory warnings – Received when our app’s memory usage is too high. Reduce the amount of memory our app uses.
Protected data becomes available/unavailable – Received when the user locks or unlocks their device. We use the applicationProtectedDataDidBecomeAvailable(_:) and applicationProtectedDataWillBecomeUnavailable(_:) methods to check protected data availability.
Handoff tasks – Received when an NSUserActivity object needs to be processed. We can use the application(_:didUpdate:) method to handle Handoff tasks.
Time changes – Received for several different time changes, such as when the phone carrier sends a time update. We can use the applicationSignificantTimeChange(_:) method to respond to time changes.
Open URLs – Received when our app needs to open a resource. We can use the application(_:open:options:) method to open URLs.
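For example, a minimal sketch of the URL-handling delegate method looks like this (the routing logic is omitted):

func application(_ app: UIApplication, open url: URL,
                 options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    // Inspect the URL and route the user to the right part of the app.
    return true
}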
That’s all for this article.
Conclusion
In this article, we learned how to manage an app’s life cycle in iOS. We also discussed how to respond to system notifications when our app is in the foreground or background, and how to handle other significant system-related events in iOS.
Thanks for reading! I hope you enjoyed this article and learned about the app life-cycle management concepts in iOS. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
Hello Readers, CoolMonkTechie heartily welcomes you to this article (Understanding Flexbox Layout).
In this article, we will learn about Flexbox layout in React Native. We will discuss the different props of Flexbox layout for styling a component in React Native.
A component can specify the layout of its children using the Flexbox algorithm. Flexbox is designed to provide a consistent layout on different screen sizes. We will normally use a combination of flexDirection, alignItems, and justifyContent to achieve the right layout.
Flexbox works the same way in React Native as it does in CSS on the web, with a few exceptions. The defaults are different, with flexDirection defaulting to column instead of row, and the flex parameter only supporting a single number.
A famous quote about learning is:
“Wisdom is not a product of schooling but of the lifelong attempt to acquire it.”
So Let’s begin.
1. flex
“flex will define how our items are going to ‘fill’ the available space along the main axis. Space will be divided according to each element’s flex property.”
In React Native, flex does not work the same way that it does in CSS. flex is a number rather than a string, and it works according to the Yoga layout engine.
When flex is a positive number, it makes the component flexible, and it will be sized proportional to its flex value. So a component with flex set to 2 will take twice the space as a component with flex set to 1. flex: <positive number> equates to flexGrow: <positive number>, flexShrink: 1, flexBasis: 0.
When flex is 0, the component is sized according to width and height, and it is inflexible.
When flex is -1, the component is normally sized according to width and height. However, if there’s not enough space, the component will shrink to its minWidth and minHeight.
flexGrow, flexShrink, and flexBasis work the same as in CSS.
In the following example, the red, yellow, and green views are all children in the container view that has flex: 1 set. The red view uses flex: 1, the yellow view uses flex: 2, and the green view uses flex: 3. 1+2+3 = 6, which means that the red view will get 1/6 of the space, the yellow 2/6, and the green 3/6.
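A minimal sketch of that layout might look like the following; the color values are only for illustration:

import React from 'react';
import { View } from 'react-native';

const FlexExample = () => (
  // The container fills the screen; its children split the space 1:2:3.
  <View style={{ flex: 1 }}>
    <View style={{ flex: 1, backgroundColor: 'red' }} />
    <View style={{ flex: 2, backgroundColor: 'yellow' }} />
    <View style={{ flex: 3, backgroundColor: 'green' }} />
  </View>
);

export default FlexExample;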
2. flexDirection
‘column’, ‘column-reverse’, ‘row’, ‘row-reverse’
Defines the direction of the main axis. Unlike on the web, the React Native default flexDirection is column, which makes sense because most mobile apps are much more vertically oriented.
3. justifyContent
‘flex-start’, ‘flex-end’, ‘center’, ‘space-between’, ‘space-around’, ‘space-evenly’
Determines the distribution of children along the primary axis. justifyContent describes how to align children within the main axis of their container (parent container). The default value is flex-start.
4. alignItems
‘flex-start’, ‘flex-end’, ‘center’, ‘stretch‘
Align items along the cross axis. So in a default view, it will control the horizontal alignment of items. alignItems describes how to align children along the cross axis of their container (parent container). alignItems is very similar to justifyContent but instead of applying to the main axis, alignItems applies to the cross axis. The default value is stretch.
For instance, if the parent has flexDirection set to row, then the cross axis is the column, and alignItems will align the children along the vertical (column) axis based on the value provided. A sketch combining these props follows.
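Here is a small sketch combining flexDirection, justifyContent, and alignItems; the sizes and colors are arbitrary:

import React from 'react';
import { View } from 'react-native';

const AxisExample = () => (
  // Main axis: horizontal (row). Children are spread out along it and
  // centered on the vertical (cross) axis.
  <View
    style={{
      flex: 1,
      flexDirection: 'row',
      justifyContent: 'space-between',
      alignItems: 'center',
    }}
  >
    <View style={{ width: 50, height: 50, backgroundColor: 'powderblue' }} />
    <View style={{ width: 50, height: 100, backgroundColor: 'skyblue' }} />
    <View style={{ width: 50, height: 150, backgroundColor: 'steelblue' }} />
  </View>
);

export default AxisExample;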
5. alignSelf
‘auto’, ‘flex-start’, ‘flex-end’, ‘center’, ‘stretch’
Aligns an item along the cross axis, overriding its parent’s alignItems property. alignSelf applies the alignment to a single child within a parent element, instead of applying it to all the children.
alignSelf has the same options and effect as alignItems, but instead of affecting all the children within a container, we can apply this property to a single child to change its alignment within its parent. alignSelf overrides any option set by the parent with alignItems. The default value is auto.
6. flexWrap
‘wrap’, ‘nowrap’, ‘wrap-reverse’
Controls whether flex items are forced onto a single line or can wrap onto multiple lines. The default value is nowrap.
7. alignContent
‘flex-start’, ‘flex-end’, ‘center’, ‘stretch’, ‘space-between’, ‘space-around’
Defines the distribution of lines along the cross axis. This only has an effect when items are wrapped onto multiple lines using flexWrap. The default value is flex-start.
8. position
‘relative’, ‘absolute’
The position type of an element defines how it is positioned within its parent.
position in React Native is similar to regular CSS, but everything is set to relative by default, so absolute positioning is always relative to the parent.
If we want to position a child using specific numbers of logical pixels relative to its parent, set the child to have absolute position.
If we want to position a child relative to something that is not its parent, don’t use styles for that. Use the component tree.
For example, think of our container as a line of people, where we tell each person to stand 5 meters behind the person in front of him (marginTop: 5). If this person is set to relative, he will respect the line and position himself relative to the person in front of him. If this person is set to absolute, he will ignore all of the people in the line and position himself as if the line were empty, 5 meters from where the line (his parent container) starts.
9. zIndex
zIndex controls which components display on top of others. Normally, we don’t use zIndex. Components render according to their order in the document tree, so later components draw over earlier ones. zIndex may be useful if we have animations or custom modal interfaces where we don’t want this behavior.
It works like the CSS z-index property – components with a larger zIndex will render on top. Think of the z-direction like it’s pointing from the phone into our eyeball.
On iOS, zIndex may require Views to be siblings of each other for it to work as expected.
In the following example, we set the zIndex of the yellow square to 1:
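The original snippet isn't shown here, so below is a minimal reconstruction of the idea: the yellow square receives zIndex: 1, so it draws above its overlapping sibling even though it comes first in the tree:

import React from 'react';
import { View } from 'react-native';

const ZIndexExample = () => (
  <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
    <View
      style={{
        width: 100,
        height: 100,
        backgroundColor: 'yellow',
        zIndex: 1, // renders on top
      }}
    />
    <View
      style={{
        width: 100,
        height: 100,
        backgroundColor: 'steelblue',
        marginTop: -50, // overlaps the yellow square to show the stacking
      }}
    />
  </View>
);

export default ZIndexExample;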
10. Flex Basis, Grow, and Shrink
flexGrow describes how any space within a container should be distributed among its children along the main axis. After laying out its children, a container will distribute any remaining space according to the flex grow values specified by its children. flexGrow accepts any floating point value >= 0, with 0 being the default value. A container will distribute any remaining space among its children weighted by the children’s flexGrow values.
flexShrink describes how to shrink children along the main axis when the total size of the children overflows the size of the container on the main axis. flexShrink is very similar to flexGrow and can be thought of in the same way if any overflowing size is considered to be negative remaining space. These two properties also work well together by allowing children to grow and shrink as needed. flexShrink accepts any floating point value >= 0, with 1 being the default value. A container will shrink its children weighted by the children’s flexShrink values.
flexBasis is an axis-independent way of providing the default size of an item along the main axis. Setting the flexBasis of a child is similar to setting the width of that child if its parent is a container with flexDirection: row or setting the height of a child if its parent is a container with flexDirection: column. The flexBasis of an item is the default size of that item, the size of the item before any flexGrow and flexShrink calculations are performed.
11. Width and Height
The width property specifies the width of an element’s content area. Similarly, the height property specifies the height of an element’s content area.
Both width and height can take the following values:
auto (default value): React Native calculates the width/height for the element based on its content, whether that is other children, text, or an image.
pixels: Defines the width/height in absolute pixels. Depending on other styles set on the component, this may or may not be the final dimension of the node.
percentage: Defines the width or height as a percentage of its parent’s width or height, respectively.
That’s all about in this article.
Conclusion
In this article, we learned about the Flexbox layout in React Native. We also discussed the different basic props of the Flexbox layout for styling a component in React Native.
Thanks for reading! I hope you enjoyed this article and learned about the Flexbox layout concepts in React Native. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
Hello Readers, CoolMonkTechie heartily welcomes you to this article (How To Create Responsive Layouts In React Native?).
In this article, we will learn how to create Responsive Layouts in React Native. Native application developers put a lot of effort into making engaging and stable apps that are supported on multiple devices. This means that Android developers have to make sure that their apps are supported on hundreds of devices. iOS developers also need to support their apps on a growing number of devices.
React Native enables developers to develop apps that can run on both iOS and Android. The problem is that the number of devices they need to support is now doubled. One particular problem is making the app responsive: there is no such thing as CSS media queries in React Native, so React Native developers need to scale all of their layouts to make the app responsive on smartphones, tablets, and other devices.
A famous quote about learning is:
“I am always ready to learn although I do not always like being taught.”
So Let’s begin.
Problems With Responsive Layout in React Native
React Native developers make their apps work for a single set of device dimensions by default. As a result, the app looks distorted on devices with different dimensions, because different devices have different pixel ratios. React Native style properties accept either density-independent pixels (DP) or percentage values.
Density-Independent Pixels
A density-independent pixel is a unit of length that enables mobile apps to scale across different screen sizes. DPs are not the classic screen pixels. Rather, DPs are mathematically calculated through the following equation: DP = PX / ScaleFactor.
PX is the number of classic screen pixels, and the scaling factor indicates how much the pixels should be enlarged.
React Native developers can scale DP values to screens of different sizes only if they have the same resolution. The problem is that there are hundreds of different devices and most of them have screens with different resolutions.
Percentage
Most web development frameworks use percentage values to design different layouts. However, React Native style properties like border-radius and border-width do not accept percentage values. Properties that do accept percentages include maxWidth, minWidth, margin, and height.
Useful Tips for Creating Responsive Layouts for React Native Apps
The following tips will help us develop responsive React Native apps on a massive range of devices.
1. Layout With Flexbox
Components can control layout with the Flexbox algorithm, which is designed to keep the proportions and consistency of the layout on different screen sizes.
Flexbox works very similarly to CSS on the web, with just a few exceptions which are really easy to learn. When the flex prop is a positive number, components become flexible and will adjust to the screen in proportion to their flex value. That means that flex equates to flexGrow: [number], flexShrink: 1, flexBasis: 0.
When flex: 0, the component is sized according to its height and width and is inflexible. If flex is a negative number, it also uses height and width, but if there is not enough space it will shrink to its minHeight and minWidth.
There are a few main properties provided by Flexbox, so let’s get through them!
Flex – describes how elements divide space between them. As mentioned above it’s limited to single numbers. If all elements have flex: 1 they will have the same width. In other cases they will split the sum of the flex among themselves.
Flex direction – controls the direction or the main axis of the content layout. You can layout the content in a row or in a column. The default direction is a column because of the nature of mobile device screens.
Justify content – describes the position of content along the main axis. You can align the content to the start, end, or center of the main axis. You can also determine the space between content components.
Align items – aligns content on the cross axis, as opposed to justifyContent that aligns on the main axis.
The flex prop does a really good job of keeping proportions between elements, regardless of screen size, while flexDirection and justifyContent keep layout behaviour consistent.
There are many more flexbox props. We touched just a few to show how they can be helpful.
2. Aspect Ratio
Another cool prop is aspectRatio, which helps keep the proportions of our elements under control. Aspect ratio describes the relationship between the width and the height of an image. It is usually expressed as two numbers separated by a colon, like 16:9. aspectRatio is a non-standard property available only in React Native, not in CSS, and it controls the size of undefined element dimensions.
For example, we can use the aspectRatio property to adjust images to the screen size when our images extend beyond the screen dimensions. We do not need to know the actual width and height of the image; just set the aspect ratio to 1:1, and our images will take all the available screen width without extending beyond the dimensions.
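A minimal sketch, assuming a remote image whose dimensions we don't know (the URL is a placeholder):

import React from 'react';
import { Image } from 'react-native';

const SquareImage = () => (
  <Image
    source={{ uri: 'https://example.com/photo.jpg' }} // placeholder URL
    // Width comes from the parent; height is derived from the 1:1 ratio.
    style={{ width: '100%', aspectRatio: 1 }}
  />
);

export default SquareImage;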
3. Screen Dimensions
It is great when our designs are the same for both platforms and all types of devices (mobiles, tablets, iPads). However, sometimes we have to deal with different layouts for specific screen dimensions or device types.
React Native does not provide properties that can identify the device type or screen size when we work with different layouts. The solution is to use the Dimensions API.
Once we obtain the width from the supported range of screen sizes, we can pick breakpoints at which our layout changes. We can provide different styles to a component or hide some parts of the screen. This is similar behaviour to media queries used in CSS. The following example obtains the screen dimensions and renders different text based on a width breakpoint:
import React, { PureComponent } from 'react';
import { Text, View, Dimensions } from 'react-native';

class App extends PureComponent {
  constructor(props) {
    super(props);
    // Read the window width once, when the component is created.
    this.state = {
      width: Dimensions.get('window').width
    };
  }

  render() {
    return (
      <View>
        {this.state.width < 320 ? <Text>width of the past</Text> : <Text>how big is big enough?</Text>}
      </View>
    );
  }
}
4. Detect the Platform
Apart from the screen size, we can also change the layout depending on the platform the app is launched on. To achieve this, we can use the Platform module.
The Platform module provides a select method which can accept any type of value. With this flexibility, we can achieve the same effect as above with cleaner code.
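A minimal sketch, assuming we want a platform-specific background color:

import { Platform, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: {
    flex: 1,
    // Platform.select picks the value matching the current platform.
    ...Platform.select({
      ios: { backgroundColor: 'red' },
      android: { backgroundColor: 'green' },
    }),
  },
});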
5. Screen Orientation
Many apps can work in portrait and landscape mode. If this is the case for our app, we have to ensure that the layout doesn’t break when the orientation changes. As we can expect, the layout can sometimes change drastically when we flip the device, and our components may need different styling depending on the orientation. Unfortunately, by default, rotating the device does not trigger a re-render, so it has to be handled manually. We already have the knowledge required to build our own orientation detection, and it’s quite easy, as the sketch below shows.
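A minimal sketch of such a helper follows. getOrientation is our own function, not a built-in API; also note that on newer React Native versions Dimensions.addEventListener returns a subscription whose remove() replaces removeEventListener:

import React, { PureComponent } from 'react';
import { Dimensions, Text, View } from 'react-native';

// Derive the orientation from the window proportions.
const getOrientation = () =>
  Dimensions.get('window').height >= Dimensions.get('window').width
    ? 'portrait'
    : 'landscape';

class OrientationAware extends PureComponent {
  state = { orientation: getOrientation() };

  onChange = () => this.setState({ orientation: getOrientation() });

  componentDidMount() {
    // Rotation doesn't trigger a re-render by itself, so listen for
    // dimension changes and keep the orientation in state.
    Dimensions.addEventListener('change', this.onChange);
  }

  componentWillUnmount() {
    Dimensions.removeEventListener('change', this.onChange);
  }

  render() {
    return (
      <View>
        <Text>{this.state.orientation}</Text>
      </View>
    );
  }
}

export default OrientationAware;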
We can also pass getOrientation to the onLayout prop exposed by the View component. It is fired on every layout change, so it should be used carefully.
If we want to take advantage of the orientation in our styles, remember that they should be inline styles. We already know how to trigger a re-render of the layout when the device is rotated, but stylesheet styles are loaded only once. That’s why styles which affect the layout on rotation should be placed inline.
That’s all for this article.
Conclusion
In this article, we learned how to create responsive layouts in React Native. We reviewed responsive layout challenges in React Native apps and provided solutions to make our responsive layouts much easier, using techniques like Flexbox, the Dimensions API, and the aspect ratio property. In addition, we can detect the device platform and screen orientation to adjust our app to different screen sizes.
Thanks for reading! I hope you enjoyed this article and learned about the responsive layout challenges and solutions in React Native. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
Hello Readers, CoolMonkTechie heartily welcomes you to this article (How To Select The Best Method Of Scheduling Background Runtime In iOS?).
In this article, we will understand how to select the best method of scheduling background runtime for our app in iOS. Selecting the right strategy depends on how our app functions in the background.
A famous quote about learning is:
“Change is the end result of all true learning.”
So Let’s begin.
Overview
If our app needs computing resources to complete tasks when it’s not running in the foreground, we can select from a number of strategies to obtain background runtime. Selecting the right strategies for our app depends on how it functions in the background.
Some apps perform work for a short time while in the foreground and must continue uninterrupted if they go to the background. Other apps defer that work to perform in the background at a later time or even at night while the device charges. And some apps need background processing time at varied and unpredictable times, such as when an external event or message arrives.
Different Methods Of Scheduling Background Runtime
In this section, we select one or more methods for our app based on how we schedule activity in the background.
1. Continue Foreground Work in the Background
The system may place apps in the background at any time. If our app performs critical work that must continue while it runs in the background, use beginBackgroundTask(withName:expirationHandler:) to alert the system. Consider this approach if our app needs to finish sending a message or complete saving a file.
The system grants our app a limited amount of time to perform its work once it enters the background. Don’t exceed this time, and use the expiration handler to cancel or defer the work if the time runs out.
Once our work completes, call endBackgroundTask(_:) before the time limit expires so that our app suspends properly. The system terminates our app if we fail to call this method.
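A minimal sketch of this pattern, assuming a short critical task such as saving a file:

import UIKit

func saveFileExtendingIntoBackground() {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask(withName: "SaveFile") {
        // Expiration handler: time is almost up, so cancel or defer the work.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }

    DispatchQueue.global().async {
        // ... perform the critical work here ...

        // Always end the task so the app can suspend properly.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}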
If the task is one that takes some time, such as downloading or uploading files, use URLSession.
2. Defer Intensive Work
To preserve battery life and performance, we can schedule background tasks for periods of low activity, such as overnight while the device charges. Use this approach when our app manages heavy workloads, such as training machine learning models or performing database maintenance.
Schedule these types of background tasks using BGProcessingTask, and the system decides the best time to launch our background task.
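A sketch of scheduling such a task with the BackgroundTasks framework; the task identifier is an assumption and must also be declared under BGTaskSchedulerPermittedIdentifiers in Info.plist:

import BackgroundTasks

func scheduleDatabaseCleaning() {
    let request = BGProcessingTaskRequest(identifier: "com.example.db-cleaning")
    request.requiresNetworkConnectivity = false
    request.requiresExternalPower = true // prefer running while charging

    do {
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule database cleaning: \(error)")
    }
}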
3. Update Our App’s Content
Our app may require short bursts of background time to perform content refresh or other work; for example, our app may fetch content from the server periodically, or regularly update its internal state. In this situation, use BGAppRefreshTask by submitting a BGAppRefreshTaskRequest.
The system decides the best time to launch our background task, and provides our app up to 30 seconds of background runtime. Complete our work within this time period and call setTaskCompleted(success:), or the system terminates our app.
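A sketch of registering and scheduling such a refresh task; the identifier and the 15-minute earliest start are assumptions:

import BackgroundTasks

func registerAppRefreshHandler() {
    // Register the launch handler once, early during app launch.
    BGTaskScheduler.shared.register(
        forTaskWithIdentifier: "com.example.refresh", using: nil
    ) { task in
        // Fetch the new content, then report the result within the time limit.
        task.setTaskCompleted(success: true)
    }
}

func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: "com.example.refresh")
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
    try? BGTaskScheduler.shared.submit(request)
}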
4. Wake Our App with a Background Push
Background pushes silently wake our app in the background. They don’t display an alert, play a sound, or badge our app’s icon. If our app obtains content from a server infrequently or at irregular intervals, use background pushes to notify our app when new content becomes available. A messaging app with a muted conversation might use a background push solution, and so might an email app that processes incoming mail without alerting the user.
When sending a background push, set the content-available key to 1, and don’t include an alert, sound, or badge. The system decides when to launch the app to download the content. To ensure our app launches, set apns-priority to 5 and apns-push-type to background.
Once the system delivers the remote notification with application(_:didReceiveRemoteNotification:fetchCompletionHandler:), our app has up to 30 seconds to complete its work. Once our app performs the work, it should call the passed completion handler as soon as possible to conserve power. If we send background pushes more frequently than three times per hour, the system imposes rate limitations.
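A sketch of the corresponding app delegate method:

func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler:
                     @escaping (UIBackgroundFetchResult) -> Void) {
    // Download the content referenced by userInfo, then call the completion
    // handler as soon as possible to conserve power.
    completionHandler(.newData)
}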
5. Request Background Time and Notify the User
If our app needs to perform a task in the background and show a notification to the user, use a Notification Service Extension. For example, an email app might need to notify a user after downloading a new email. Subclass UNNotificationServiceExtension and bundle the system extension with our app. Upon receiving a push notification, our service extension wakes up and obtains background runtime through didReceive(_:withContentHandler:).
When our extension completes its work, it must call the content handler with the content we want to deliver to the user. Our extension has a limited amount of time to modify the content and execute the contentHandler block.
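A minimal sketch of such an extension subclass; the payload-modification logic is a placeholder:

import UserNotifications

class NotificationService: UNNotificationServiceExtension {
    override func didReceive(
        _ request: UNNotificationRequest,
        withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void
    ) {
        guard let content = request.content.mutableCopy()
                as? UNMutableNotificationContent else {
            contentHandler(request.content)
            return
        }
        // Modify the content here, e.g. after downloading an attachment.
        contentHandler(content)
    }
}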
That’s all for this article.
Conclusion
In this article, we learned how to select the best method of scheduling background runtime in iOS.
Thanks for reading! I hope you enjoyed this article and learned about selecting the best method for scheduling background runtime in iOS. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
Hello Readers, CoolMonkTechie heartily welcomes you to this article (How To Use Sensors In Android?).
In this article, we will learn how to use Android sensors. We all must have played some Android games that include sensor support, i.e. games where tilting the phone makes something happen. For example, in the Temple Run game, tilting the phone to the left or right changes the position of the runner. All these games use the sensors present in our Android device. Other examples are shaking the phone to lock the screen, finding a direction with the help of a compass, etc. All of these are examples of Android sensors.
Use sensors on the device to add rich location and motion capabilities to our app, from GPS or network location to accelerometer, gyroscope, temperature, barometer, and more.
To understand the Android Sensors, we will discuss the below topics :
Overview
Sensor Coordinate System
Categories of Sensors
Android Sensor Framework
Perform Tasks To Use Sensor-Related APIs
Handling Different Sensor Configurations
Best Practices for Accessing and Using Sensors
A famous quote about learning is:
“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”
So Let’s begin.
Overview
In Android devices, there are various built-in sensors that can be used to measure the orientation, motions, and various other kinds of environmental conditions. In general, there are two types of sensors in Android devices:
Hardware Sensors: Hardware sensors are physical components present in Android devices. They can directly measure various properties like field strength, acceleration, etc., according to the type of sensor, and after measuring the environment properties, they send the data to software sensors.
Software Sensors: Software sensors, also known as virtual sensors, are sensors that take the help of one or more hardware sensors and derive a result based on the data those hardware sensors collect.
It is not necessary that all Android devices have all the sensors. Some devices may have all sensors and some may lack one or two of them. At the same time, a particular device may have more than one sensor of the same type but with different configurations and capabilities.
Sensor Coordinate System
To express data values or to collect data, the sensors in Android devices use a 3-axis coordinate system, i.e. we have X, Y, and Z axes. The following figure depicts the position of the various axes used in sensors.
In the default orientation, the horizontal axis is represented by the X-axis, the vertical axis is represented by the Y-axis, and the Z-axis points towards the outside of the screen face, i.e. towards the user. This coordinate system is used by the following sensors:
Acceleration sensor
Gravity sensor
Gyroscope
Linear acceleration sensor
Geomagnetic field sensor
The most important point to understand about this coordinate system is that the axes are not swapped when the device’s screen orientation changes—that is, the sensor’s coordinate system never changes as the device moves. This behavior is the same as the behavior of the OpenGL coordinate system.
Another point to understand is that our application must not assume that a device’s natural (default) orientation is portrait. The natural orientation for many tablet devices is landscape. And the sensor coordinate system is always based on the natural orientation of a device.
Finally, if our application matches sensor data to the on-screen display, we need to use the getRotation() method to determine screen rotation, and then use the remapCoordinateSystem() method to map sensor coordinates to screen coordinates. We need to do this even if our manifest specifies portrait-only display.
Categories of Sensors
Following are the three broad categories of sensors in Android:
Motion Sensors: The sensors that are responsible for measuring or identifying the shakes and tilts of our Android devices are called motion sensors. These sensors measure the rotational forces along the three axes. Gravity sensors, accelerometers, etc. are some examples of motion sensors.
Position Sensors: As the name suggests, the Position sensors are used to determine the position of an Android device. Magnetometers, Proximity sensors are some of the examples of Position sensors.
Environmental Sensors: Environmental properties like temperature, pressure, humidity, etc are identified with the help of Environmental sensors. Some of the examples of Environmental sensors are thermometer, photometer, barometer, etc.
Android Sensor Framework
Everything related to sensors in an Android device is managed or controlled by the Android Sensor Framework. By using the Android Sensor Framework we can collect raw sensor data. It is a part of the android.hardware package and includes the following classes and interfaces:
SensorManager: This is used to get access to various sensors present in the device to use it according to need.
Sensor: This class is used to create an instance of a specific sensor.
SensorEvent: This class is used to find the details of the sensor events.
SensorEventListener: This interface can be used to trigger or perform some action when there is a change in the sensor values.
Following are the usages of the Android Sensor Framework:
You can register or unregister sensor events.
You can collect data from various sensors.
You can find the sensors that are active on a device and determine their capabilities.
Perform Tasks To Use Sensor-Related APIs
In this section, we will see how we can identify the various sensors present in a device and how to determine their capabilities. In a typical application, we use these sensor-related APIs to perform two basic tasks:
Identifying sensors and sensor capabilities
Monitoring Sensor Events
Identifying sensors and sensor capabilities
Identifying sensors and sensor capabilities at runtime is useful if our application has features that rely on specific sensor types or capabilities. For example, we may want to identify all of the sensors that are present on a device and disable any application features that rely on sensors that are not present. Likewise, we may want to identify all of the sensors of a given type so we can choose the sensor implementation that has the optimum performance for our application.
It is not necessary that two Android devices have the same number or types of sensors. The availability of sensors varies from device to device and from one Android version to another, so we cannot guarantee that two Android versions or two Android devices have the same sensors. It therefore becomes a necessary task to identify which sensors are present on a particular Android device.
As seen earlier, we can take the help of the Android Sensor Framework to find the sensors that are present in a particular Android device. Not only that, with the help of various methods of the sensor framework, we can determine the capabilities of a sensor like its resolution, its maximum range, and its power requirements.
Following are the steps that need to be followed to get the list of available sensors in a device:
Create an instance of the SensorManager.
Call the getSystemService() method and pass SENSOR_SERVICE as an argument. This SENSOR_SERVICE is used to retrieve a SensorManager to access sensors.
Call the getSensorList() method to get all the sensors present in the device. The parameter of this method is the sensor type. Either we can use TYPE_ALL to get all the sensors available in the device, or we can use a particular type, for example TYPE_GRAVITY or TYPE_GYROSCOPE, to get the list of sensors of that type only (we can have more than one sensor of the same type).
If we want the default sensor of a particular type rather than a list, we can use the getDefaultSensor() method. This method returns null if there is no sensor of that type in the Android device.
// Step 1
private lateinit var sensorManager: SensorManager

// Step 2
sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager

// Step 3: to get a list of all sensors, use TYPE_ALL
val deviceSensors: List<Sensor> = sensorManager.getSensorList(Sensor.TYPE_ALL)
// Or use TYPE_GRAVITY, TYPE_GYROSCOPE, or some other sensor type:
// val gravitySensors: List<Sensor> = sensorManager.getSensorList(Sensor.TYPE_GRAVITY)

// Step 4
if (sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY) != null) {
    // There's a gravity sensor.
} else {
    // No gravity sensor.
}
Apart from finding the list of available sensors, we can also check the capabilities of a particular sensor, i.e. its resolution, power, range, etc.:
Sensor.getResolution() //returns a float value which is the resolution of the sensor
Sensor.getMaximumRange() //returns a float value which is the maximum range of the sensor
Sensor.getPower() //returns a float value which is the power in mA used by sensor
Monitoring Sensor Events
Monitoring sensor events is how we acquire raw sensor data. A sensor event occurs every time a sensor detects a change in the parameters it is measuring. A sensor event provides us with four pieces of information: the name of the sensor that triggered the event, the timestamp for the event, the accuracy of the event, and the raw sensor data that triggered the event.
To monitor raw sensor data we need to implement two callback methods that are exposed through the SensorEventListener interface: onAccuracyChanged() and onSensorChanged(). The Android system calls these methods whenever the following occurs:
onAccuracyChanged(): This is called when there is a change in the accuracy of measurement of the sensor. This method will provide the Sensor object that has changed and the new accuracy. There are four statuses of accuracy i.e. SENSOR_STATUS_ACCURACY_LOW, SENSOR_STATUS_ACCURACY_MEDIUM, SENSOR_STATUS_ACCURACY_HIGH, SENSOR_STATUS_UNRELIABLE.
onSensorChanged(): This is called when there is an availability of new sensor data. This method will provide us with a SensorEvent object that contains new sensor data.
class SensorActivity : Activity(), SensorEventListener {

    private lateinit var sensorManager: SensorManager
    private var mGravity: Sensor? = null

    public override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        // Gravity sensor; may be null if the device doesn't have one.
        mGravity = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Called when the sensor's measurement accuracy changes.
    }

    override fun onSensorChanged(event: SensorEvent) {
        // Called when there is new sensor data.
    }

    // Register the listener while the activity is in the foreground.
    override fun onResume() {
        super.onResume()
        mGravity?.also { gravity ->
            sensorManager.registerListener(this, gravity, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    // Unregister to avoid draining the battery while paused.
    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)
    }
}
In this example, the default data delay (SENSOR_DELAY_NORMAL) is specified when the registerListener() method is invoked. The data delay (or sampling rate) controls the interval at which sensor events are sent to our application via the onSensorChanged() callback method. The default data delay is suitable for monitoring typical screen orientation changes and uses a delay of 200,000 microseconds. We can specify other data delays, such as SENSOR_DELAY_GAME (20,000 microsecond delay), SENSOR_DELAY_UI (60,000 microsecond delay), or SENSOR_DELAY_FASTEST (0 microsecond delay). As of Android 3.0 (API Level 11) we can also specify the delay as an absolute value (in microseconds).
The delay that we specify is only a suggested delay. The Android system and other applications can alter this delay. As a best practice, we should specify the largest delay that we can because the system typically uses a smaller delay than the one we specify (that is, we should choose the slowest sampling rate that still meets the needs of our application). Using a larger delay imposes a lower load on the processor and therefore uses less power.
There is no public method for determining the rate at which the sensor framework is sending sensor events to our application; however, we can use the timestamps that are associated with each sensor event to calculate the sampling rate over several events. We should not have to change the sampling rate (delay) once we set it. If for some reason we do need to change the delay, we will have to unregister and reregister the sensor listener.
It’s also important to note that this example uses the onResume() and onPause() callback methods to register and unregister the sensor event listener. As a best practice we should always disable sensors we don’t need, especially when our activity is paused. Failing to do so can drain the battery in just a few hours because some sensors have substantial power requirements and can use up battery power quickly. The system will not disable sensors automatically when the screen turns off.
Handling Different Sensor Configurations
Android does not specify a standard sensor configuration for devices, which means device manufacturers can incorporate any sensor configuration that they want into their Android-powered devices. As a result, devices can include a variety of sensors in a wide range of configurations. If our application relies on a specific type of sensor, we have to ensure that the sensor is present on a device so our app can run successfully.
We have two options for ensuring that a given sensor is present on a device:
Detect sensors at runtime and enable or disable application features as appropriate.
Use Google Play filters to target devices with specific sensor configurations.
Detecting sensors at runtime
If our application uses a specific type of sensor, but doesn’t rely on it, we can use the sensor framework to detect the sensor at runtime and then disable or enable application features as appropriate. For example, a navigation application might use the temperature sensor, pressure sensor, GPS sensor, and geomagnetic field sensor to display the temperature, barometric pressure, location, and compass bearing. If a device doesn’t have a pressure sensor, we can use the sensor framework to detect the absence of the pressure sensor at runtime and then disable the portion of our application’s UI that displays pressure. For example, the following code checks whether there’s a pressure sensor on a device:
private lateinit var sensorManager: SensorManager
...
sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
if (sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE) != null) {
    // Success! There's a pressure sensor.
} else {
    // Failure! No pressure sensor.
}
Using Google Play filters to target specific sensor configurations
If we are publishing our application on Google Play, we can use the <uses-feature> element in our manifest file to filter our application from devices that do not have the appropriate sensor configuration for our application. The <uses-feature> element has several hardware descriptors that let us filter applications based on the presence of specific sensors. The sensors we can list include: accelerometer, barometer, compass (geomagnetic field), gyroscope, light, and proximity. The following is an example manifest entry that filters out devices that do not have an accelerometer:
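<uses-feature android:name="android.hardware.sensor.accelerometer"
              android:required="true" />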
If we add this element and descriptor to our application’s manifest, users will see our application on Google Play only if their device has an accelerometer.
We should set the descriptor to android:required="true" only if our application relies entirely on a specific sensor. If our application uses a sensor for some functionality, but still runs without the sensor, we should list the sensor in the <uses-feature> element, but set the descriptor to android:required="false". This helps ensure that devices can install our app even if they do not have that particular sensor. This is also a project management best practice that helps us keep track of the features our application uses. Keep in mind, if our application uses a particular sensor, but still runs without the sensor, then we should detect the sensor at runtime and disable or enable application features as appropriate.
Best Practices for Accessing and Using Sensors
As we design our sensor implementation, be sure to follow the guidelines that are discussed in this section. These guidelines are recommended best practices for anyone who is using the sensor framework to access sensors and acquire sensor data.
1. Only gather sensor data in the foreground
On devices running Android 9 (API level 28) or higher, apps running in the background have the following restrictions:
Sensors that use the continuous reporting mode, such as accelerometers and gyroscopes, don’t receive events.
Sensors that use the on-change or one-shot reporting modes don’t receive events.
Given these restrictions, it’s best to detect sensor events either when our app is in the foreground or as part of a foreground service.
2. Unregister sensor listeners
Be sure to unregister a sensor’s listener when we are done using the sensor or when the sensor activity pauses. If a sensor listener is registered and its activity is paused, the sensor will continue to acquire data and use battery resources unless we unregister the sensor. The following code shows how to use the onPause() method to unregister a listener:
private lateinit var sensorManager: SensorManager
...
override fun onPause() {
    super.onPause()
    sensorManager.unregisterListener(this)
}
3. Test with the Android Emulator
The Android Emulator includes a set of virtual sensor controls that allow you to test sensors such as accelerometer, ambient temperature, magnetometer, proximity, light, and more.
The emulator uses a connection with an Android device that is running the SdkControllerSensor app. Note that this app is available only on devices running Android 4.0 (API level 14) or higher. (If the device is running Android 4.0, it must have Revision 2 installed.) The SdkControllerSensor app monitors changes in the sensors on the device and transmits them to the emulator, which then updates its virtual sensors based on the new values that it receives from the sensors on our device.
4. Don’t block the onSensorChanged() method
Sensor data can change at a high rate, which means the system may call the onSensorChanged(SensorEvent) method quite often. As a best practice, we should do as little as possible within the onSensorChanged(SensorEvent) method so we don’t block it. If our application requires us to do any data filtering or reduction of sensor data, we should perform that work outside of the onSensorChanged(SensorEvent) method.
5. Avoid using deprecated methods or sensor types
Several methods and constants have been deprecated. In particular, the TYPE_ORIENTATION sensor type has been deprecated; to get orientation data we should use the getOrientation() method instead. Likewise, the TYPE_TEMPERATURE sensor type has been deprecated; we should use the TYPE_AMBIENT_TEMPERATURE sensor type instead on devices that are running Android 4.0 or later.
6. Verify sensors before we use them
Always verify that a sensor exists on a device before we attempt to acquire data from it. Don’t assume that a sensor exists simply because it’s a frequently-used sensor. Device manufacturers are not required to provide any particular sensors in their devices.
7. Choose sensor delays carefully
When we register a sensor with the registerListener() method, be sure we choose a delivery rate that is suitable for our application or use-case. Sensors can provide data at very high rates. Allowing the system to send extra data that we don’t need wastes system resources and uses battery power.
That’s all for this article.
Conclusion
In this article, we learned how to use Android sensors. We learned about hardware and software sensors, saw how the Android Sensor Framework can be used to determine the sensors present in an Android device, and finally saw how to use the SensorEventListener.
Thanks for reading! I hope you enjoyed this article and learned about the sensor concepts in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
Hello Readers, CoolMonkTechie heartily welcomes you to this article (Understanding App Architecture in Android).
In this article, we will learn about app architecture in Android. We will also discuss best practices and the recommended architecture for building robust, production-quality apps.
A famous quote about learning is:
“The beautiful thing about learning is that nobody can take it away from you.”
So Let’s begin.
Mobile App User Experiences
In the majority of cases, desktop apps have a single entry point from a desktop or program launcher, then run as a single, monolithic process. Android apps, on the other hand, have a much more complex structure. A typical Android app contains multiple app components, including activities, fragments, services, content providers, and broadcast receivers.
We declare most of these app components in our app manifest. The Android OS then uses this file to decide how to integrate our app into the device’s overall user experience. Given that a properly-written Android app contains multiple components and that users often interact with multiple apps in a short period of time, apps need to adapt to different kinds of user-driven workflows and tasks.
For example, consider what happens when we share a photo in our favorite social networking app:
The app triggers a camera intent. The Android OS then launches a camera app to handle the request. At this point, the user has left the social networking app, but their experience is still seamless.
The camera app might trigger other intents, like launching the file chooser, which may launch yet another app.
Eventually, the user returns to the social networking app and shares the photo.
At any point during the process, the user could be interrupted by a phone call or notification. After acting upon this interruption, the user expects to be able to return to, and resume, this photo-sharing process. This app-hopping behavior is common on mobile devices, so our app must handle these flows correctly.
Keep in mind that mobile devices are also resource-constrained, so at any time, the operating system might kill some app processes to make room for new ones.
Given the conditions of this environment, it’s possible for our app components to be launched individually and out-of-order, and the operating system or user can destroy them at any time. Because these events aren’t under our control, we shouldn’t store any app data or state in our app components, and our app components shouldn’t depend on each other.
Common Architectural Principles
If we shouldn’t use app components to store app data and state, how should we design our app?
Separation of concerns
The most important principle to follow is separation of concerns. It’s a common mistake to write all our code in an Activity or a Fragment. These UI-based classes should only contain logic that handles UI and operating system interactions. By keeping these classes as lean as possible, we can avoid many lifecycle-related problems.
Keep in mind that we don’t own implementations of Activity and Fragment; rather, these are just glue classes that represent the contract between the Android OS and our app. The OS can destroy them at any time based on user interactions or because of system conditions like low memory. To provide a satisfactory user experience and a more manageable app maintenance experience, it’s best to minimize our dependency on them.
Drive UI from a model
Another important principle is that we should drive our UI from a model, preferably a persistent model. Models are components that are responsible for handling the data for an app. They’re independent from the View objects and app components in our app, so they’re unaffected by the app’s lifecycle and the associated concerns.
Persistence is ideal for the following reasons:
Our users don’t lose data if the Android OS destroys our app to free up resources.
Our app continues to work in cases when a network connection is flaky or not available.
By basing our app on model classes with the well-defined responsibility of managing the data, our app is more testable and consistent.
Recommended App Architecture
In this section, we demonstrate how to structure an app using Architecture Components by working through an end-to-end use case.
Imagine we’re building a UI that shows a user profile. We use a private backend and a REST API to fetch the data for a given profile.
Overview
To start, consider the following diagram, which shows how all the modules should interact with one another after designing the app:
Notice that each component depends only on the component one level below it. For example, activities and fragments depend only on a view model. The repository is the only class that depends on multiple other classes; in this example, the repository depends on a persistent data model and a remote backend data source.
This design creates a consistent and pleasant user experience. Regardless of whether the user comes back to the app several minutes after they’ve last closed it or several days later, they instantly see a user’s information that the app persists locally. If this data is stale, the app’s repository module starts updating the data in the background.
Build The User Interface
The UI consists of a fragment, UserProfileFragment, and its corresponding layout file, user_profile_layout.xml.
To drive the UI, our data model needs to hold the following data elements:
User ID: The identifier for the user. It’s best to pass this information into the fragment using the fragment arguments. If the Android OS destroys our process, this information is preserved, so the ID is available the next time our app is restarted.
User object: A data class that holds details about the user.
We use a UserProfileViewModel, based on the ViewModel architecture component, to keep this information.
A ViewModel object provides the data for a specific UI component, such as a fragment or activity, and contains data-handling business logic to communicate with the model. For example, the ViewModel can call other components to load the data, and it can forward user requests to modify the data. The ViewModel doesn’t know about UI components, so it isn’t affected by configuration changes, such as recreating an activity when rotating the device.
We’ve now defined the following files:
user_profile.xml: The UI layout definition for the screen.
UserProfileFragment: The UI controller that displays the data.
UserProfileViewModel: The class that prepares the data for viewing in the UserProfileFragment and reacts to user interactions.
The following code snippets show the starting contents for these files. (The layout file is omitted for simplicity.)
UserProfileViewModel
class UserProfileViewModel : ViewModel() {
    val userId: String = TODO()
    val user: User = TODO()
}
UserProfileFragment
class UserProfileFragment : Fragment() {

    // To use the viewModels() extension function, include
    // "androidx.fragment:fragment-ktx:latest-version" in your app
    // module's build.gradle file.
    private val viewModel: UserProfileViewModel by viewModels()

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        return inflater.inflate(R.layout.main_fragment, container, false)
    }
}
Now that we have these code modules, how do we connect them? After all, when the user field is set in the UserProfileViewModel class, we need a way to inform the UI.
To obtain the user, our ViewModel needs to access the Fragment arguments. We can either pass them from the Fragment, or better, using the SavedState module, we can make our ViewModel read the argument directly:
// UserProfileViewModel
class UserProfileViewModel(
    savedStateHandle: SavedStateHandle
) : ViewModel() {
    val userId: String = savedStateHandle["uid"] ?:
        throw IllegalArgumentException("missing user id")
    val user: User = TODO()
}

// UserProfileFragment
private val viewModel: UserProfileViewModel by viewModels(
    factoryProducer = { SavedStateVMFactory(this) }
    ...
)
Here, SavedStateHandle allows ViewModel to access the saved state and arguments of the associated Fragment or Activity.
Now we need to inform our Fragment when the user object is obtained. This is where the LiveData architecture component comes in.
LiveData is an observable data holder. Other components in our app can monitor changes to objects using this holder without creating explicit and rigid dependency paths between them. The LiveData component also respects the lifecycle state of our app’s components—such as activities, fragments, and services—and includes cleanup logic to prevent object leaking and excessive memory consumption.
If we’re already using a library like RxJava, we can continue using it instead of LiveData. When we use libraries and approaches like these, however, we should make sure to handle our app’s lifecycle properly. In particular, we should pause our data streams when the related LifecycleOwner is stopped and destroy them when the LifecycleOwner is destroyed. We can also add the android.arch.lifecycle:reactivestreams artifact to use LiveData with another reactive streams library, such as RxJava2.
To incorporate the LiveData component into our app, we change the field type in the UserProfileViewModel to LiveData<User>. Now, the UserProfileFragment is informed when the data is updated. Furthermore, because this LiveData field is lifecycle aware, it automatically cleans up references after they’re no longer needed.
UserProfileViewModel
class UserProfileViewModel(
    savedStateHandle: SavedStateHandle
) : ViewModel() {
    val userId: String = savedStateHandle["uid"] ?:
        throw IllegalArgumentException("missing user id")
    val user: LiveData<User> = TODO()
}
Now we modify UserProfileFragment to observe the data and update the UI:
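A minimal sketch of that observer follows; updateUI() is a hypothetical helper (not part of the files defined above) that binds the User fields to the views in user_profile_layout.xml:
UserProfileFragment
class UserProfileFragment : Fragment() {
    private val viewModel: UserProfileViewModel by viewModels(
        factoryProducer = { SavedStateVMFactory(this) }
    )

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // The observer lambda runs every time the user data changes
        // while the fragment is in an active lifecycle state.
        viewModel.user.observe(viewLifecycleOwner, Observer { user ->
            // Hypothetical helper that refreshes the views with the new data.
            updateUI(user)
        })
    }
}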
Every time the user profile data is updated, the onChanged() callback is invoked, and the UI is refreshed.
If we’re familiar with other libraries where observable callbacks are used, we might have noticed that we didn’t override the fragment’s onStop() method to stop observing the data. This step isn’t necessary with LiveData because it’s lifecycle aware, which means it doesn’t invoke the onChanged() callback unless the fragment is in an active state (that is, it has received onStart() but hasn’t yet received onStop()). LiveData also automatically removes the observer when the fragment’s onDestroy() method is called.
We also didn’t add any logic to handle configuration changes, such as the user rotating the device’s screen. The UserProfileViewModel is automatically restored when the configuration changes, so as soon as the new fragment is created, it receives the same instance of ViewModel, and the callback is invoked immediately using the current data. Given that ViewModel objects are intended to outlast the corresponding View objects that they update, we shouldn’t include direct references to View objects within our implementation of ViewModel.
Fetch Data
Now that we’ve used LiveData to connect the UserProfileViewModel to the UserProfileFragment, how can we fetch the user profile data?
For this example, we assume that our backend provides a REST API. We use the Retrofit library to access our backend, though we are free to use a different library that serves the same purpose.
Here’s our definition of Webservice that communicates with our backend:
Webservice
interface Webservice {
    /**
     * @GET declares an HTTP GET request
     * @Path("user") annotation on the userId parameter marks it as a
     * replacement for the {user} placeholder in the @GET path
     */
    @GET("/users/{user}")
    fun getUser(@Path("user") userId: String): Call<User>
}
A first idea for implementing the ViewModel might involve directly calling the Webservice to fetch the data and assign this data to our LiveData object. This design works, but by using it, our app becomes more and more difficult to maintain as it grows. It gives too much responsibility to the UserProfileViewModel class, which violates the separation of concerns principle. Additionally, the scope of a ViewModel is tied to an Activity or Fragment lifecycle, which means that the data from the Webservice is lost when the associated UI object’s lifecycle ends. This behavior creates an undesirable user experience.
Instead, our ViewModel delegates the data-fetching process to a new module, a repository.
Repository modules handle data operations. They provide a clean API so that the rest of the app can retrieve this data easily. They know where to get the data from and what API calls to make when data is updated. We can consider repositories to be mediators between different data sources, such as persistent models, web services, and caches.
Our UserRepository class, shown in the following code snippet, uses an instance of Webservice to fetch a user’s data:
UserRepository
class UserRepository {
    private val webservice: Webservice = TODO()
    // ...
    fun getUser(userId: String): LiveData<User> {
        // This isn't an optimal implementation. We'll fix it later.
        val data = MutableLiveData<User>()
        webservice.getUser(userId).enqueue(object : Callback<User> {
            override fun onResponse(call: Call<User>, response: Response<User>) {
                data.value = response.body()
            }
            // Error case is left out for brevity.
            override fun onFailure(call: Call<User>, t: Throwable) {
                TODO()
            }
        })
        return data
    }
}
Even though the repository module looks unnecessary, it serves an important purpose: it abstracts the data sources from the rest of the app. Now, our UserProfileViewModel doesn’t know how the data is fetched, so we can provide the view model with data obtained from several different data-fetching implementations.
Manage dependencies between components
The UserRepository class above needs an instance of Webservice to fetch the user’s data. It could simply create the instance, but to do that, it also needs to know the dependencies of the Webservice class. Additionally, UserRepository is probably not the only class that needs a Webservice. This situation requires us to duplicate code, as each class that needs a reference to Webservice needs to know how to construct it and its dependencies. If each class creates a new Webservice, our app could become very resource heavy.
We can use the following design patterns to address this problem:
Dependency injection (DI): Dependency injection allows classes to define their dependencies without constructing them. At runtime, another class is responsible for providing these dependencies. We recommend the Dagger 2 library for implementing dependency injection in Android apps. Dagger 2 automatically constructs objects by walking the dependency tree, and it provides compile-time guarantees on dependencies.
Service locator: The service locator pattern provides a registry where classes can obtain their dependencies instead of constructing them.
It’s easier to implement a service registry than to use DI, so if we aren’t familiar with DI, we can use the service locator pattern instead; a minimal sketch follows below.
These patterns allow us to scale our code because they provide clear patterns for managing dependencies without duplicating code or adding complexity. Furthermore, these patterns allow us to quickly switch between test and production data-fetching implementations.
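As a rough illustration of the service locator option, a minimal registry might look like the following sketch. The ServiceLocator object is illustrative, and it assumes UserRepository is refactored to take its Webservice as a constructor parameter, as it does in the Dagger version later in this section:
ServiceLocator
// A minimal service-locator sketch. Classes ask this registry for their
// dependencies instead of constructing them, so the Webservice is built
// (and its own dependencies resolved) in exactly one place.
object ServiceLocator {
    // One shared Webservice instance for the whole app.
    // The Retrofit setup is omitted for brevity.
    private val webservice: Webservice by lazy { TODO("build the Retrofit Webservice") }

    fun provideUserRepository(): UserRepository = UserRepository(webservice)
}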
Connect ViewModel and the repository
Now, we modify our UserProfileViewModel to use the UserRepository object:
UserProfileViewModel
class UserProfileViewModel @Inject constructor(
    savedStateHandle: SavedStateHandle,
    userRepository: UserRepository
) : ViewModel() {
    val userId: String = savedStateHandle["uid"] ?:
        throw IllegalArgumentException("missing user id")
    val user: LiveData<User> = userRepository.getUser(userId)
}
Cache Data
The UserRepository implementation abstracts the call to the Webservice object, but because it relies on only one data source, it’s not very flexible.
The key problem with the UserRepository implementation is that after it fetches data from our backend, it doesn’t store that data anywhere. Therefore, if the user leaves the UserProfileFragment, then returns to it, our app must re-fetch the data, even if it hasn’t changed.
This design is suboptimal for the following reasons:
It wastes valuable network bandwidth.
It forces the user to wait for the new query to complete.
To address these shortcomings, we add a new data source to our UserRepository, which caches the User objects in memory:
UserRepository
// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
    private val webservice: Webservice,
    // Simple in-memory cache. Details omitted for brevity.
    private val userCache: UserCache
) {
    fun getUser(userId: String): LiveData<User> {
        // The cache returns null when there is no entry for this user yet.
        val cached: LiveData<User>? = userCache.get(userId)
        if (cached != null) {
            return cached
        }
        val data = MutableLiveData<User>()
        // The LiveData object is currently empty, but it's okay to add it to the
        // cache here because it will pick up the correct data once the query
        // completes.
        userCache.put(userId, data)
        // This implementation is still suboptimal but better than before.
        // A complete implementation also handles error cases.
        webservice.getUser(userId).enqueue(object : Callback<User> {
            override fun onResponse(call: Call<User>, response: Response<User>) {
                data.value = response.body()
            }
            // Error case is left out for brevity.
            override fun onFailure(call: Call<User>, t: Throwable) {
                TODO()
            }
        })
        return data
    }
}
Persist Data
Using our current implementation, if the user rotates the device or leaves and immediately returns to the app, the existing UI becomes visible instantly because the repository retrieves data from our in-memory cache.
However, what happens if the user leaves the app and comes back hours later, after the Android OS has killed the process? By relying on our current implementation in this situation, we need to fetch the data again from the network. This refetching process isn’t just a bad user experience; it’s also wasteful because it consumes valuable mobile data.
We could fix this issue by caching the web requests, but that creates a key new problem: What happens if the same user data shows up from another type of request, such as fetching a list of friends? The app would show inconsistent data, which is confusing at best. For example, our app might show two different versions of the same user’s data if the user made the list-of-friends request and the single-user request at different times. Our app would need to figure out how to merge this inconsistent data.
The proper way to handle this situation is to use a persistent model. This is where the Room persistence library comes to the rescue.
Room is an object-mapping library that provides local data persistence with minimal boilerplate code. At compile time, it validates each query against our data schema, so broken SQL queries result in compile-time errors instead of runtime failures. Room abstracts away some of the underlying implementation details of working with raw SQL tables and queries. It also allows us to observe changes to the database’s data, including collections and join queries, exposing such changes using LiveData objects. It even explicitly defines execution constraints that address common threading issues, such as accessing storage on the main thread.
To use Room, we need to define our local schema. First, we add the @Entity annotation to our User data model class and a @PrimaryKey annotation to the class’s id field. These annotations mark User as a table in our database and id as the table’s primary key:
User
@Entity
data class User(
    @PrimaryKey val id: String,
    val name: String,
    val lastName: String
)
Then, we create a database class by implementing RoomDatabase for our app:
UserDatabase
@Database(entities = [User::class], version = 1)
abstract class UserDatabase : RoomDatabase()
Notice that UserDatabase is abstract. Room automatically provides an implementation of it.
We now need a way to insert user data into the database. For this task, we create a data access object (DAO).
UserDao
@Dao
interface UserDao {
    @Insert(onConflict = OnConflictStrategy.REPLACE)
    fun save(user: User)

    @Query("SELECT * FROM user WHERE id = :userId")
    fun load(userId: String): LiveData<User>
}
Notice that the load method returns an object of type LiveData<User>. Room knows when the database is modified and automatically notifies all active observers when the data changes. Because Room uses LiveData, this operation is efficient; it updates the data only when there is at least one active observer.
Room checks invalidations based on table modifications, which means it may dispatch false positive notifications.
With our UserDao class defined, we then reference the DAO from our database class:
UserDatabase
@Database(entities = [User::class], version = 1)
abstract class UserDatabase : RoomDatabase() {
    abstract fun userDao(): UserDao
}
Now we can modify our UserRepository to incorporate the Room data source:
// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
    private val webservice: Webservice,
    // Executor that runs the refresh work off the main thread.
    private val executor: Executor,
    private val userDao: UserDao
) {
    fun getUser(userId: String): LiveData<User> {
        refreshUser(userId)
        // Returns a LiveData object directly from the database.
        return userDao.load(userId)
    }

    private fun refreshUser(userId: String) {
        // Runs in a background thread.
        executor.execute {
            // Check if user data was fetched recently. hasUser() is an
            // additional DAO query, omitted here for brevity.
            val userExists = userDao.hasUser(FRESH_TIMEOUT)
            if (!userExists) {
                // Refreshes the data.
                val response = webservice.getUser(userId).execute()
                // Check for errors here.
                // Updates the database. The LiveData object automatically
                // refreshes, so we don't need to do anything else here.
                userDao.save(response.body()!!)
            }
        }
    }

    companion object {
        val FRESH_TIMEOUT = TimeUnit.DAYS.toMillis(1)
    }
}
Notice that even though we changed where the data comes from in UserRepository, we didn’t need to change our UserProfileViewModel or UserProfileFragment. This small-scoped update demonstrates the flexibility that our app’s architecture provides. It’s also great for testing, because we can provide a fake UserRepository and test our production UserProfileViewModel at the same time.
If users wait a few days before returning to an app that uses this architecture, it’s likely that they’ll see out-of-date information until the repository can fetch updated information. Depending on our use case, we may not want to show this out-of-date information. Instead, we can display placeholder data, which shows example values and indicates that our app is currently fetching and loading up-to-date information.
Single source of truth
It’s common for different REST API endpoints to return the same data. For example, if our backend has another endpoint that returns a list of friends, the same user object could come from two different API endpoints, maybe even using different levels of granularity. If the UserRepository were to return the response from the Webservice request as-is, without checking for consistency, our UIs could show confusing information because the version and format of data from the repository would depend on the endpoint most recently called.
For this reason, our UserRepository implementation saves web service responses into the database. Changes to the database then trigger callbacks on active LiveData objects. Using this model, the database serves as the single source of truth, and other parts of the app access it using our UserRepository. Regardless of whether we use a disk cache, we recommend that our repository designate a data source as the single source of truth for the rest of our app.
Show in-progress operations
In some use cases, such as pull-to-refresh, it’s important for the UI to show the user that there’s currently a network operation in progress. It’s good practice to separate the UI action from the actual data because the data might be updated for various reasons. For example, if we fetched a list of friends, the same user might be fetched again programmatically, triggering a LiveData<User> update. From the UI’s perspective, the fact that there’s a request in flight is just another data point, similar to any other piece of data in the User object itself.
We can use one of the following strategies to display a consistent data-updating status in the UI, regardless of where the request to update the data came from:
Change getUser() to return a LiveData object that also includes the status of the network operation, such as LiveData<Resource<User>> (the Resource class is shown in the Exposing Network Status section below). For an example, see the NetworkBoundResource implementation in the android-architecture-components GitHub project.
Provide another public function in the UserRepository class that can return the refresh status of the User. This option is better if we want to show the network status in our UI only when the data-fetching process originated from an explicit user action, such as pull-to-refresh.
Test Each Component
In the separation of concerns section, we mentioned that one key benefit of following this principle is testability.
The following list shows how to test each code module from our extended example:
User interface and interactions: Use an Android UI instrumentation test. The best way to create this test is to use the Espresso library. We can create the fragment and provide it a mock UserProfileViewModel. Because the fragment communicates only with the UserProfileViewModel, mocking this one class is sufficient to fully test our app’s UI.
ViewModel: We can test the UserProfileViewModel class using a JUnit test. We only need to mock one class, UserRepository.
UserRepository: We can test the UserRepository using a JUnit test, as well. We need to mock the Webservice and the UserDao. In these tests, verify the following behavior:
The repository makes the correct web service calls.
It saves results into the database.
The repository doesn’t make unnecessary requests if the data is cached and up to date.
Because both Webservice and UserDao are interfaces, we can mock them or create fake implementations for more complex test cases.
UserDao: Test DAO classes using instrumentation tests. Because these instrumentation tests don’t require any UI components, they run quickly. For each test, create an in-memory database to ensure that the test doesn’t have any side effects, such as changing the database files on disk.
Webservice: In these tests, avoid making network calls to our backend. It’s important for all tests, especially web-based ones, to be independent from the outside world. Several libraries, including MockWebServer, can help us create a fake local server for these tests.
Testing Artifacts: Architecture Components provides a Maven artifact to control its background threads (see the dependency snippet after this list). The androidx.arch.core:core-testing artifact contains the following JUnit rules:
InstantTaskExecutorRule: Use this rule to instantly execute any background operation on the calling thread.
CountingTaskExecutorRule: Use this rule to wait on background operations of Architecture Components. You can also associate this rule with Espresso as an idling resource.
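To pull these rules into our tests, the artifact is added as a test dependency; a sketch follows, with the version as a placeholder:
build.gradle
dependencies {
    // JUnit rules for controlling Architecture Components' background executors.
    testImplementation "androidx.arch.core:core-testing:$arch_version"
}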
Best Practices
Programming is a creative field, and building Android apps isn’t an exception. There are many ways to solve a problem, be it communicating data between multiple activities or fragments, retrieving remote data and persisting it locally for offline mode, or any number of other common scenarios that nontrivial apps encounter.
Although the following recommendations aren’t mandatory, it has been our experience that following them makes our code base more robust, testable, and maintainable in the long run:
1. Avoid designating our app’s entry points—such as activities, services, and broadcast receivers—as sources of data.
Instead, they should only coordinate with other components to retrieve the subset of data that is relevant to that entry point. Each app component is rather short-lived, depending on the user’s interaction with their device and the overall current health of the system.
2. Create well-defined boundaries of responsibility between various modules of our app.
For example, don’t spread the code that loads data from the network across multiple classes or packages in our code base. Similarly, don’t put multiple unrelated responsibilities, such as data caching and data binding, into the same class.
3. Expose as little as possible from each module.
Don’t be tempted to create “just that one” shortcut that exposes an internal implementation detail from one module. We might gain a bit of time in the short term, but we then incur technical debt many times over as our codebase evolves.
4. Consider how to make each module testable in isolation.
For example, having a well-defined API for fetching data from the network makes it easier to test the module that persists that data in a local database. If, instead, we mix the logic from these two modules in one place, or distribute our networking code across our entire code base, it becomes much more difficult—if not impossible—to test.
5. Focus on the unique core of our app so it stands out from other apps.
Don’t reinvent the wheel by writing the same boilerplate code again and again. Instead, focus our time and energy on what makes our app unique, and let the Android Architecture Components and other recommended libraries handle the repetitive boilerplate.
6. Persist as much relevant and fresh data as possible.
That way, users can enjoy our app’s functionality even when their device is in offline mode. Remember that not all of our users enjoy constant, high-speed connectivity.
7. Assign one data source to be the single source of truth.
Whenever our app needs to access this piece of data, it should always originate from this single source of truth.
Exposing Network Status
In this section, we demonstrate how to expose network status using a Resource class that encapsulates both the data and its state.
The following code snippet provides a sample implementation of Resource:
Resource
// A generic class that contains data and status about loading this data.
sealed class Resource<T>(
    val data: T? = null,
    val message: String? = null
) {
    class Success<T>(data: T) : Resource<T>(data)
    class Loading<T>(data: T? = null) : Resource<T>(data)
    class Error<T>(message: String, data: T? = null) : Resource<T>(data, message)
}
Because it’s common to load data from the network while showing the disk copy of that data, it’s good to create a helper class that we can reuse in multiple places. For this example, we create a class called NetworkBoundResource.
The following diagram shows the decision tree for NetworkBoundResource:
It starts by observing the database for the resource. When the entry is loaded from the database for the first time, NetworkBoundResource checks whether the result is good enough to be dispatched or whether it should be re-fetched from the network. Note that both of these situations can happen at the same time, given that we probably want to show cached data while updating it from the network.
If the network call completes successfully, it saves the response into the database and re-initializes the stream. If the network request fails, the NetworkBoundResource dispatches a failure directly.
The following code snippet shows the public API provided by NetworkBoundResource class for its subclasses:
NetworkBoundResource.kt
// ResultType: Type for the Resource data.
// RequestType: Type for the API response.
abstract class NetworkBoundResource<ResultType, RequestType> {
    // Called to save the result of the API response into the database.
    @WorkerThread
    protected abstract fun saveCallResult(item: RequestType)

    // Called with the data in the database to decide whether to fetch
    // potentially updated data from the network.
    @MainThread
    protected abstract fun shouldFetch(data: ResultType?): Boolean

    // Called to get the cached data from the database.
    @MainThread
    protected abstract fun loadFromDb(): LiveData<ResultType>

    // Called to create the API call.
    @MainThread
    protected abstract fun createCall(): LiveData<ApiResponse<RequestType>>

    // Called when the fetch fails. The child class may want to reset components
    // like rate limiter.
    protected open fun onFetchFailed() {}

    // Returns a LiveData object that represents the resource that's implemented
    // in the base class.
    fun asLiveData(): LiveData<ResultType> = TODO()
}
Note these important details about the class’s definition:
It defines two type parameters, ResultType and RequestType, because the data type returned from the API might not match the data type used locally.
It uses a class called ApiResponse for network requests. ApiResponse is a simple wrapper around the Retrofit 2 Call class that converts responses to instances of LiveData.
After creating the NetworkBoundResource, we can use it to write our disk- and network-bound implementations of User in the UserRepository class:
UserRepository
// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
    private val webservice: Webservice,
    private val userDao: UserDao
) {
    fun getUser(userId: String): LiveData<User> {
        return object : NetworkBoundResource<User, User>() {
            override fun saveCallResult(item: User) {
                userDao.save(item)
            }

            override fun shouldFetch(data: User?): Boolean {
                // rateLimiter and isFresh() are helpers omitted for brevity;
                // they throttle requests and check how old the cached data is.
                return rateLimiter.canFetch(userId) && (data == null || !isFresh(data))
            }

            override fun loadFromDb(): LiveData<User> {
                return userDao.load(userId)
            }

            override fun createCall(): LiveData<ApiResponse<User>> {
                // Assumes Webservice has been updated to return
                // LiveData<ApiResponse<User>> instead of Call<User>.
                return webservice.getUser(userId)
            }
        }.asLiveData()
    }
}
That’s all about in this article.
Conclusion
In this article, we learned about best practices and recommended architecture for building robust, production-quality apps in Android.
Thanks for reading ! I hope you enjoyed and learned about App Architecture Concept in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other articles from CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn about an overview of the navigation component in Android. We will discuss the navigation graph and how to pass arguments safely.
The Navigation Architecture Component simplifies implementing navigation, while also helping us visualize our app’s navigation flow. The library provides a number of benefits, including:
Automatic handling of fragment transactions
Correctly handling up and back actions by default
Default behaviors for animations and transitions
Deep linking as a first-class operation
Implementing navigation UI patterns (like navigation drawers and bottom nav) with little additional work
Type safety when passing information while navigating
Android Studio tooling for visualizing and editing the navigation flow of an app
The Navigation component requires Android Studio 3.3 or higher and is dependent on Java 8 language features.
A famous quote about learning is :
” I am still learning.”
So Let’s begin.
Overview
The Navigation Component consists of three key parts:
Navigation Graph (New XML resource) — This is a resource that contains all navigation-related information in one centralized location. This includes all the places in our app, known as destinations, and possible paths a user could take through our app.
NavHostFragment (Layout XML view) — This is a special widget we add to our layout. It displays different destinations from our Navigation Graph.
NavController (Kotlin/Java object) — This is an object that keeps track of the current position within the navigation graph. It orchestrates swapping destination content in the NavHostFragment as we move through a navigation graph.
Navigation Component Integration
Just include the following code in the dependencies block of our module-level build.gradle file:
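A typical setup looks like the following sketch; the $nav_version variable is a placeholder, so check the official releases page for the latest version:
build.gradle
dependencies {
    // Kotlin variants of the Navigation component libraries.
    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
}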
First, we will create a file that will contain our navigation graph. In the res directory, create a new Android resource file as follows:
This will create an empty resource file named nav_graph.xml under the navigation directory.
For example, say we have two fragments named FirstFragment and SecondFragment. FirstFragment has a button; on clicking it, we will navigate to the SecondFragment.
We define these fragments in the navigation graph as below:
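A minimal version of that graph might look like the following sketch (the package names and IDs are illustrative):
nav_graph.xml
<?xml version="1.0" encoding="utf-8"?>
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/nav_graph"
    app:startDestination="@id/firstFragment">

    <fragment
        android:id="@+id/firstFragment"
        android:name="com.example.FirstFragment">
        <!-- Navigating with this action opens SecondFragment. -->
        <action
            android:id="@+id/action_first_to_second"
            app:destination="@id/secondFragment" />
    </fragment>

    <fragment
        android:id="@+id/secondFragment"
        android:name="com.example.SecondFragment" />
</navigation>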
Here the root tag, navigation, has an attribute called app:startDestination, which holds the id of our first fragment. This means that the first fragment will be loaded in the NavHostFragment automatically.
The Navigation Component introduces the concept of a destination. A destination is any place we can navigate to in our app, usually a fragment or an activity. These are supported out of the box, but we can also make our own custom destination types if needed.
Notice that for the first fragment we have defined an action with the following attributes:
Each action should have a unique id which we will use to navigate to the required destination.
Here the destination points to the id of the second fragment defined in the nav graph, which means that with this action we will navigate to the second fragment.
After this step when we open the nav_graph.xml and switch to the design tab, it should look like the following:
Navigation Types
With the navigation component, we have multiple ways to navigate :
1. Navigation using destination Id
We can provide the id of the destination fragment to navigate to it, like the following:
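A minimal sketch, assuming the button and the IDs from the graph sketch above:
button.setOnClickListener {
    // Navigate directly using the id of the destination defined in the nav graph.
    findNavController().navigate(R.id.secondFragment)
}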
The last piece required is to define the NavHostFragment. It is a special widget that will display the different destinations defined in the nav graph. Copy the following code and paste it in the layout of the activity in which we want to load our FirstFragment.
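A sketch of that layout entry; the id is illustrative, and the nav graph file name matches the one created earlier:
activity_main.xml
<fragment
    android:id="@+id/nav_host_fragment"
    android:name="androidx.navigation.fragment.NavHostFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:defaultNavHost="true"
    app:navGraph="@navigation/nav_graph" />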
android:name="androidx.navigation.fragment.NavHostFragment" defines the NavHostFragment used by NavController
app:defaultNavHost="true" simply states that we want this to be the NavHost that intercepts and handles the system back button on our device.
app:navGraph="@navigation/nav_graph" associates the NavHostFragment with a navigation graph. This navigation graph specifies all the destinations the user can navigate to in this NavHostFragment.
After these steps when we run the app, FirstFragment should be loaded automatically and when we click the button it should open the SecondFragment. Also when we press the back button, it should navigate back to the FirstFragment.
Safe Arguments
The navigation component has a Gradle plugin, called safe args, that generates simple object and builder classes for type-safe access to arguments specified for destinations and actions.
Safe args lets us get rid of code like the following:
val username = arguments?.getString("usernameKey")
with the following:
val username = args.username
Safe Arguments Integration
Add the following code in the top-level Gradle file:
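A sketch of that setup; the plugin version is a placeholder:
// Project-level build.gradle
buildscript {
    dependencies {
        classpath "androidx.navigation:navigation-safe-args-gradle-plugin:$nav_version"
    }
}

// Module-level build.gradle
apply plugin: "androidx.navigation.safeargs.kotlin"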
Now the safe args plugin is active in our project. We will add 2 arguments to be passed to the SecondFragment from the FirstFragment. We will define arguments in the nav graph as follows:
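A sketch of those argument definitions, placed inside the SecondFragment destination of the graph shown earlier:
<fragment
    android:id="@+id/secondFragment"
    android:name="com.example.SecondFragment">
    <argument
        android:name="arg1"
        app:argType="integer"
        android:defaultValue="0" />
    <argument
        android:name="arg2"
        app:argType="string"
        android:defaultValue="default" />
</fragment>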
Here the first argument is named arg1 which is of type Integer and has the default value of 0. Similarly, the second argument is named arg2 which is of type String and has a default value of “default”.
After we define these arguments, Gradle will generate a class named SecondFragmentArgs which can be used in SecondFragment to retrieve the arguments in the following way.
val safeArgs: SecondFragmentArgs by navArgs()
val arg1 = safeArgs.arg1
val arg2 = safeArgs.arg2
Here we are assured that arg1 is of type Integer and arg2 is of type String and thus we don’t need to cast them to their respective types.
Now in order to pass these arguments from the FirstFragment, another class named FirstFragmentDirections gets created which has a static method named actionFirstToSecond. This can be used to pass the arguments in the following way.
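A sketch of that call site in FirstFragment; the argument values here are arbitrary:
button.setOnClickListener {
    // The generated Directions class carries the typed arguments.
    val action = FirstFragmentDirections.actionFirstToSecond(arg1 = 42, arg2 = "hello")
    findNavController().navigate(action)
}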
That’s all that is required to pass arguments in a type-safe manner. Apart from the built-in types, we can also define custom argument types by creating a Parcelable class.
That’s all about in this article.
Conclusion
In this article, we learned about an overview of the navigation component in Android. We also discussed the navigation graph and how to pass arguments safely in Android.
Thanks for reading ! I hope you enjoyed and learned about Navigation Component Concept in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn when and how to use GradientDrawable in android. The UI of modern-day apps is getting better and better. Designers are trying out different styles and combinations which go best with the Android App. One of the key components of Android being used these days is called GradientDrawable.
A famous quote about learning is :
” I am always ready to learn although I do not always like being taught.”
So Let’s Begin.
Introduction
A GradientDrawable is a drawable with a color gradient, used for buttons, backgrounds, etc.
Let’s start by taking a basic example of creating a button in Android with a tri-color gradient background.
1. XML Drawable
This can be done simply by setting the android:background attribute to a drawable defined in XML:
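A sketch of that drawable resource, using the same three colors as the programmatic example later in this article; the file name matches the @drawable/tri_color_drawable reference below:
tri_color_drawable.xml
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <!-- angle="0" gives a LEFT-RIGHT gradient. -->
    <gradient
        android:angle="0"
        android:startColor="#FFD98880"
        android:centerColor="#FFF4D03F"
        android:endColor="#FF48C9B0" />
</shape>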
Here we have given a <gradient /> tag and added the three color codes – startColor, centerColor and endColor. We have also given the angle as 0 which denotes LEFT-RIGHT orientation.
Note that angles can only be multiples of 45. Also, a maximum of 3 colors can be specified in XML – startColor, centerColor, and endColor – and all 3 will occupy equal space.
Then make changes in the <Button /> tag to make it use this background drawable:
android:background="@drawable/tri_color_drawable"
2. GradientDrawable
We can get the same functionality programmatically as well by using GradientDrawable. In our activity, create the GradientDrawable.
val gradientDrawable = GradientDrawable(
    GradientDrawable.Orientation.LEFT_RIGHT,
    intArrayOf(
        0XFFD98880.toInt(),
        0XFFF4D03F.toInt(),
        0XFF48C9B0.toInt()
    )
)
Here we have given the orientation as LEFT_RIGHT (which corresponds to 0 we added in XML earlier). We have also given the same three colors we used earlier.
Next, set this gradientDrawable as the background of the button.
val continueBtn: Button = findViewById(R.id.continue_btn)
continueBtn.background = gradientDrawable
The Need for GradientDrawable
So why do we need GradientDrawable if the work can be done using XML?
Example 1
Let’s take a recap of the previous example.
Here each color is taking equal space. We can say that color 1 starts at 0 percent, color 2 at ~33 percent, and color 3 at ~66 percent. What if we don’t want all colors to occupy equal space? Instead, we want something like this:
If we notice here, color 1 is taking half of the entire space (50 percent) whereas the other two colors are equally covering the remaining space. (25 percent each).
This cannot be achieved via XML. This is where the actual power of GradientDrawable is unleashed.
To achieve the above result, we can do something like this:
val gradientDrawable = GradientDrawable(
    GradientDrawable.Orientation.LEFT_RIGHT,
    intArrayOf(
        0XFFD98880.toInt(),
        0XFFD98880.toInt(),
        0XFFF4D03F.toInt(),
        0XFF48C9B0.toInt()
    )
)
Here color 1 is used twice, hence it will take 50 percent of the total space. The remaining two colors will equally occupy the other 50 percent.
Example 2
As another example, let’s say we want a 5 colors gradient.
This also is not possible via XML but can be easily done using GradientDrawable like this:
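A sketch with five colors; the two extra color values here are arbitrary:
val gradientDrawable = GradientDrawable(
    GradientDrawable.Orientation.LEFT_RIGHT,
    intArrayOf(
        0XFFD98880.toInt(),
        0XFFF4D03F.toInt(),
        0XFF48C9B0.toInt(),
        0XFF5DADE2.toInt(),
        0XFFAF7AC5.toInt()
    )
)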
Here we have given 5 colors, and each will cover equal space. We can give as many colors as required.
GradientDrawable’s Main Components
Let’s take a closer look at GradientDrawable’s main components:
1. Orientation
It can be one of the orientations as defined in the enum GradientDrawable.Orientation. In our examples, we have used LEFT_RIGHT. Other possible orientations are TOP_BOTTOM, TR_BL, RIGHT_LEFT, etc. They are self-explanatory.
2. Array of Colors
Here we need to provide an array of hexadecimal color values. For 0XFFD98880,
0X -Represents a hexadecimal number.
FF – Represents the alpha value to be applied to the color. Alpha ranges from 00 (fully transparent) to FF (fully opaque); here FF represents 100% alpha.
D98880 – Represents the RRGGBB hexadecimal color value.
That’s all about in this article.
Conclusion
In this article, we learned about how to use GradientDrawable in Android. We have also discussed different ways and main components of GradientDrawable in Android with examples.
Thanks for reading ! I hope you enjoyed and learned about GradientDrawable Concept in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn how Flow APIs work in Kotlin and how can we start using it in our android projects.
If we are working as Android developers and looking to build an app asynchronously, we might be using RxJava, as it has an operator for almost everything. RxJava has become one of the most important things to know in Android.
But with Kotlin, a lot of people tend to use coroutines. With the Kotlin Coroutines 1.2.0 alpha release, JetBrains came up with the Flow API as part of it. With Flow in Kotlin, we can now handle a stream of data that emits values sequentially.
“In Kotlin, Coroutines are just the scheduler part of RxJava, but now with Flow APIs coming alongside them, Flow can be an alternative to RxJava in Android.”
We will cover the following topics to understand the Flow API:
What are Flow APIs in Kotlin Coroutines?
Start Integrating Flow APIs in your project
Builders in Flows
A few examples using Flow operators.
A famous quote about learning is :
“Learn as though you would never be able to master it; hold it as though you would be in fear of losing it.”
So Let’s begin.
What are Flow APIs in Kotlin Coroutines?
The Flow API in Kotlin is a better way to handle a stream of data asynchronously that executes sequentially.
So, in RxJava, the Observable type is an example of a structure that represents a stream of items. Its body does not get executed until it is subscribed to by a subscriber, and once it is subscribed, the subscriber starts receiving the emitted data items. Similarly, Flow works on the same principle: the code inside a flow builder does not run until the flow is collected.
Start Integrating Flow APIs in your Project
Let us create an android project and then let’s start integrating the Kotlin Flow APIs.
Now, let’s begin the implementation of Flow APIs in MainActivity. In the onCreate() function of the Activity, let’s add two functions:
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    setupFlow()
    setupClicks()
}
Here, setupFlow() is the function where we will define the flow, and setupClicks() is the function where we handle the button click that displays the data emitted from the flow.
We will declare a lateinit variable of Flow of Int type,
lateinit var flow: Flow<Int>
Step 04
Now, in setupFlow() we will emit items after a 500-millisecond delay:
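A minimal sketch of setupFlow(), assuming the kotlinx-coroutines Flow artifacts (1.2.0 or later) are on the classpath; the emitted range is illustrative:
private fun setupFlow() {
    flow = flow {
        (0..10).forEach {
            // Suspend for 500ms before emitting the next value.
            delay(500)
            // emit() sends the value downstream to the collector.
            emit(it)
        }
    }.flowOn(Dispatchers.Default)
}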
To emit the numbers, we use emit(), which sends each value to the collector. It is part of FlowCollector, which the flow builder block uses as its receiver.
And, at last, we use the flowOn operator, which changes the context in which the flow emits its values. Here, we can use different Dispatchers like IO, Default, etc.
” flowOn() is like subscribeOn() in RxJava.”
Step 05
Now, we need to write the setupClicks() function, where we print the values emitted from the flow. When we click the button, we print the values one by one:
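A minimal sketch, assuming the layout has a Button with the id button (the id and log tag are illustrative):
private fun setupClicks() {
    findViewById<Button>(R.id.button).setOnClickListener {
        CoroutineScope(Dispatchers.Main).launch {
            // Collect the values emitted by the flow and log them one by one.
            flow.collect { value ->
                Log.d("MainActivity", value.toString())
            }
        }
    }
}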
Here, flow.collect starts collecting the values from the flow on the main thread, as Dispatchers.Main is used in the launch coroutine builder in CoroutineScope. The collected values are then printed one by one in Logcat.
Here, we converted a range of values from 1 to 5 into a flow and emitted each of them with a delay of 300ms; when we attach a collector to the flow, we receive those values as output. One thing to note when zipping two flows: if both flows don’t have the same number of items, the resulting flow completes as soon as one of the flows completes.
That’s all about in this article.
Conclusion
In this article, we learned how Flow APIs work in Kotlin and how we can start using them in our Android projects. We also discussed builders in Flow and the zip operator.
Thanks for reading ! I hope you enjoyed and learned about Flow API concepts in Kotlin. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn how to optimize application (APK) size in Android. Most users would not like to download a large APK, as it might consume most of their network/Wi-Fi bandwidth and, most importantly, space on their mobile device. The size of our APK has an impact on how fast our app loads, how much memory it uses, and how much power it consumes.
It’s important to optimize the size of the app, since mobiles are always memory- and space-constrained devices. We will discuss the various ways in which we can reduce our APK size in Android development.
A famous quote about learning is :
” That is what learning is. You suddenly understand something you’ve understood all your life, but in a new way.”
So Let’s begin.
Understanding Android App Bundles
An Android App Bundle is a publishing format that includes all our app’s compiled code and resources, and defers APK generation and signing to Google Play.
Google Play uses our app bundle to generate and serve optimized APKs for each device configuration, so only the code and resources that are needed for a specific device are downloaded to run our app. We no longer have to build, sign, and manage multiple APKs to optimize support for different devices, and users get smaller, more optimized downloads.
App Bundles are Publishing formats
An Android App Bundle is a file (with the .aab file extension) that we upload to Google Play. App bundles are signed binaries that organize our app’s code and resources into modules. Code and resources for each module are organized similarly to what we would find in an APK, and that makes sense, because each of these modules may be generated as a separate APK. Google Play then uses the app bundle to generate the various APKs that are served to users, such as the base APK, feature APKs, configuration APKs, and (for devices that do not support split APKs) multi-APKs. Directories such as drawable/, values/, and lib/ hold the code and resources that Google Play uses to create configuration APKs for each module.
Android App Bundles — File Targeting and Serving
How do these Android App Bundles help in file targeting? It’s simple: let’s say we have hdpi, xhdpi, and xxhdpi resources in our application. If the application is being downloaded on an hdpi device (for example), only the resources from hdpi will be installed on the device.
If the application is targeting multiple languages (English, Spanish, French, etc.), only the specific string resources will be downloaded onto the device.
This helps in saving the space on the device’s memory.
Building the Android App Bundle
Building the Android App Bundle is straightforward. Just select Build > Build Bundle(s) / APK(s) > Build Bundle(s) from the Android Studio menu.
Android Size Analyzer
To understand which files are actually taking up the most space in the application, use the Android Size Analyzer plugin inside Android Studio. To install the plugin:
Select File > Settings (or on Mac, Android Studio > Preferences.)
Select the Plugins section in the left panel.
Click the Marketplace tab.
Search for the “Android Size Analyzer” plugin.
Click the Install button for the analyzer plugin.
Restart the IDE after installing the plugin. Now, to analyze the application, go to Analyze > Analyze App Size from the menu bar. We will get a window similar to this:
The recommendations can help us in reducing the app size in a much better way.
Remove Unused Resources
As we have already discussed, the size of the APK has an impact on how fast the app loads, how much memory it uses, and how much power it consumes. Hence, one of the main things we can do to reduce APK size is to remove unused resources from the application.
Also, it is advised to use scalable drawable objects (vector assets) instead of other image formats like PNG, JPEG, etc.
Using Vector Drawable is one of the best ways to reduce the size significantly.
Using Lint
Lint helps by generating warnings about unused code inside the application. Removing that code, in turn, helps reduce the size of the application.
Reduce libraries size
Check whether we can reduce size when it comes to the usage of libraries. For example, use only the specific Google Play Services libraries we need, and compile only what is required.
Reuse Code
Object-Oriented Programming has solved a lot of problems in the programming world. Try reusing code as much as possible instead of repeating it. Repetitive code leads to increased file size, thereby affecting the APK size.
Compress PNG and JPEG files
If using PNG and JPEG files is something mandatory in our project, we can compress them using image quality tools like TinyPNG.
In most of the applications, Images are used to convey messages or improve the UX. But the biggest drawback here might be using a lot of images that can bloat up the size of the app. Ensure that the Image Compression techniques are understood and implemented to reduce the size of the apk before releasing the app to the play store.
Use WebP file format
As we have seen in the image shared for the Android Analyser plugin above, one of the recommendations was to change the PNG file to a WebP file format.
Use Proguard
Every time we build a new project, we see the following piece of code in the app-level build.gradle file.
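A typical release block looks like the following sketch; note that in a fresh project minifyEnabled defaults to false until we enable it:
android {
    buildTypes {
        release {
            // Enables code shrinking and obfuscation via ProGuard/R8.
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                    'proguard-rules.pro'
        }
    }
}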
ProGuard makes the following impact on our project,
It reduces the size of the application.
It removes the unused classes and methods that contribute to the 64K method count limit of an Android application.
It makes the application difficult to reverse engineer by obfuscating the code.
Create Multiple APKs
If we are not using App Bundles, we can go the traditional way and create multiple APKs. Multiple APKs are mainly used to generate specific APKs for different screen densities and different CPU architectures.
ShrinkResources
Reduce resources wherever possible. Using the shrinkResources attribute in Gradle will remove all the resources which are not being used anywhere in the project. Enable this in our app-level build.gradle file by adding the lines below:
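A sketch of that configuration; resource shrinking requires code shrinking to be enabled as well:
android {
    buildTypes {
        release {
            // Removes resources that are never referenced by the app code.
            shrinkResources true
            minifyEnabled true
        }
    }
}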
Remove the localized resources which are not needed by using resConfigs. All the support libraries may have localized folders for the other languages which we don’t need.
The Gradle resource shrinker removes only resources that are not referenced by our app code, which means it will not remove alternative resources (device- or locale-specific) for different device configurations. If necessary, we can use the Android Gradle plugin’s resConfigs property to remove alternative resource files that our app does not need.
The following snippet shows how to limit our language resources to just English and French:
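A sketch of that configuration in the app-level build.gradle file:
android {
    defaultConfig {
        // Keep only English and French resources; all other
        // localized resources are stripped from the APK.
        resConfigs "en", "fr"
    }
}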
As discussed, having unused language resources only swells the apk size. Hence it is important to remove unused files and resources.
DebugImplementation
Remove any debug library we have in the app. This can be done by declaring it with debugImplementation, so that it is only included in debug builds.
Use R8 to reduce APK size
R8 shrinking is a process in which we reduce the amount of code in our application, and by doing so, the APK size automatically gets reduced. R8 does most of the same work as ProGuard.
So why should we prefer it?
The reason is that it works with our existing ProGuard rules and shrinks the code faster while improving the output size.
So, understanding and implementing these different methods in our applications helps us deliver an optimized APK.
That’s all about in this article.
Conclusion
In this article, we learned about different methods to optimize app size in Android.
Thanks for reading ! I hope you enjoyed and learned about Reducing application size in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.