Hello Readers, CoolMonkTechie heartily welcomes you to this article (An Overview Of Strict Mode).
In this article, we will learn about Strict Mode in Android. Performing any kind of long blocking operation or disk I/O on the Android main thread can cause ANR (Application Not Responding) issues. We may not even realise that we have a potential ANR until it is too late and the application is already in our users' hands. In most cases, the library or framework we use in our application will not allow us to perform disk operations on the main thread (Room, for instance, makes it explicit when we want to turn that check off).
So, how do we find and correct such mistakes during Android application development when the libraries or frameworks don't explicitly prevent this kind of operation?
Android provides the StrictMode developer tool for exactly that.
To understand Strict Mode, we cover the topics below:
Overview
Strict Mode Policies
Different Ways to Notify Strict Mode
Enabling Strict Mode
Recommendations
A famous quote about learning is :
“If you live long enough, you’ll make mistakes. But if you learn from them, you’ll be a better person.“
So Let’s begin.
Overview
Strict Mode is a developer tool which detects things we might be doing by accident and brings them to our attention so we can fix them.
Best practice in Android says "keeping disk and network operations off the main thread makes applications much smoother and more responsive". So StrictMode is used to catch coding issues such as disk I/O or network access on the application's main thread, i.e., the UI thread. By keeping our application's main thread responsive, we also prevent ANR dialogs from being shown to users.
This is a debugging tool introduced in Android 2.3 (API level 9) but more features were added in Android 3.0.
StrictMode Policies
Strict Mode has two types of policies and each policy has various rules. Each policy also has various methods of showing when a rule is violated.
Thread Policy
Thread policies focus on things that are not recommended on the main thread, like disk or network operations. The thread policy can monitor the following violations:
Disk Reads
Disk Writes
Network access
Resource Mismatch
Custom Slow Code
VM Policy
VM policies focus on memory leaks caused by bad coding practices, like forgetting to close a SQLite cursor or leaking Activities. The VM policy can monitor the following violations:
Activity leaks
SQLite objects leaks
Closable objects leaks
Registration objects leaks
Class instance limit
File URL exposure
Different Ways to Notify Strict Mode
There are a variety of ways by which the developer gets notified when a rule we set has been violated. In Strict Mode terms, this is known as a penalty.
Some of these methods are listed below:
penaltyDeath(): Crash the whole process on violation.
penaltyDeathOnNetwork(): Crash the whole process on any network usage.
penaltyDialog(): Show an annoying dialog to the developer on detected violations.
penaltyFlashScreen(): Flash the screen during a violation.
penaltyLog(): Log detected violations to the system log.
Enabling Strict Mode
To enable and configure Strict Mode in our application, we need to use the setThreadPolicy() and setVmPolicy() methods of StrictMode. It is good practice to set the policies as early as possible, for example in the onCreate() method of our Application, Activity, or other application component:
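A minimal sketch of such a setup is shown below (the Application subclass name is illustrative, and the specific detect…() checks can be adjusted); it enables the checks only in debug builds and mirrors the penalties discussed next: penaltyLog() for the thread policy, and penaltyLog() plus penaltyDeath() for the VM policy.
class MyApplication : Application() {
    override fun onCreate() {
        if (BuildConfig.DEBUG) {
            // Thread policy: catch disk and network work on the main thread.
            StrictMode.setThreadPolicy(
                StrictMode.ThreadPolicy.Builder()
                    .detectDiskReads()
                    .detectDiskWrites()
                    .detectNetwork()
                    .penaltyLog()
                    .build()
            )
            // VM policy: catch leaked objects and Activities.
            StrictMode.setVmPolicy(
                StrictMode.VmPolicy.Builder()
                    .detectLeakedSqlLiteObjects()
                    .detectLeakedClosableObjects()
                    .detectActivityLeaks()
                    .penaltyLog()
                    .penaltyDeath()
                    .build()
            )
        }
        super.onCreate()
    }
}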
Now, we can decide what should happen when a violation is detected. In the example above, we used only penaltyLog() for the thread policy, but for the VM policy we used penaltyLog() as well as penaltyDeath(). We can watch the output of adb logcat while we use our application to see the violations as they happen.
Here is an example of the penaltyLog() output, in which Strict Mode warns us that we are performing a disk write operation on the main thread.
DEBUG/StrictMode(3134): StrictMode policy violation; ~duration=319 ms: android.os.StrictMode$StrictModeDiskWriteViolation: policy=31 violation=1
DEBUG/StrictMode(3134): at android.os.StrictMode$AndroidBlockGuardPolicy.onWriteToDisk(StrictMode.java:1041)
DEBUG/StrictMode(3134): at android.database.sqlite.SQLiteStatement.acquireAndLock(SQLiteStatement.java:219)
DEBUG/StrictMode(3134): at android.database.sqlite.SQLiteStatement.executeUpdateDelete(SQLiteStatement.java:83)
DEBUG/StrictMode(3134): at android.database.sqlite.SQLiteDatabase.updateWithOnConflict(SQLiteDatabase.java:1829)
DEBUG/StrictMode(3134): at android.database.sqlite.SQLiteDatabase.update(SQLiteDatabase.java:1780)
DEBUG/StrictMode(3134): at com.test.data.MainActivity.update(MainActivity.java:87)
Recommendations
If we find violations that we feel are problematic, there are a variety of tools to help solve them: threads, Handler, AsyncTask, IntentService, etc. But don’t feel compelled to fix everything that Strict Mode finds. In particular, many cases of disk access are often necessary during the normal activity lifecycle. Use Strict Mode to find things we did by accident. Network requests on the UI thread are almost always a problem, though.
It is not a security mechanism and is not guaranteed to find all disk or network accesses. While it does propagate its state across process boundaries when doing Binder calls, it’s still ultimately a best effort mechanism. Notably, disk or network access from JNI calls won’t necessarily trigger it.
Many violations are not related to our own code, and we may want to suppress them: for example, disk-read checking (or any other StrictMode check that is being violated). Here, we want a helper function that does the following:
Turn off the Strict Mode checking
Running the function or the parameterized block of code
Turn on the original Strict Mode checking
The function looks like this:
fun permitDiskReads(func: () -> Any?): Any? {
    if (BuildConfig.DEBUG) {
        // Remember the current policy so it can be restored afterwards.
        val oldThreadPolicy = StrictMode.getThreadPolicy()
        StrictMode.setThreadPolicy(
            StrictMode.ThreadPolicy.Builder(oldThreadPolicy)
                .permitDiskReads()
                .build()
        )
        // Run the block while disk reads are permitted.
        val anyValue = func()
        // Restore the original policy.
        StrictMode.setThreadPolicy(oldThreadPolicy)
        return anyValue
    } else {
        return func()
    }
}
In the above example, we are only suppressing disk-read checking; we could suppress other checks in the same way if required. Note that the helper not only runs the function, it also returns the Any? value (i.e., nothing or something) that the function might return.
This is useful when the code whose check we want to suppress is a function that returns something. For example, suppose we have a SampleManager.getInstance() that also has a disk-read violation we want to suppress. We could then do the following:
val sampleManager =
permitDiskReads { SampleManager.getInstance() } as SampleManager
That’s all about in this article.
Conclusion
In this article, we understood Strict Mode in Android. This article explained Strict Mode, which is a very useful tool for finding and fixing performance issues, object leaks, and other hard-to-find runtime issues for Android developers. We may think that we are doing everything off the main thread, but sometimes small things can creep in and cause these issues. It helps keep our applications in check and should definitely be enabled whilst developing our applications.
Thanks for reading! I hope you enjoyed and learned about Strict Mode concepts in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
You can find other articles of CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you to this article (An Overview Of Memory Safety In Swift).
In this article, we will learn about an overview of Memory Safety in Swift. Swift handles most memory safety automatically, to avoid conflicts. However, certain functions may cause conflicts and will either cause compile-time or runtime errors. This article reviews conflicting access to memory (for in-out parameters, self in methods, and properties) in Swift.
A famous quote about learning is :
” One learns from books and example only that certain things can be done. Actual learning requires that you do those things.”
So Let’s begin.
Memory Safety Overview
Swift monitors risky behaviour that may occur in code. For example, Swift ensures that variables are initialized before they're used, that memory isn't accessed once it's deallocated, and that array indices are checked for out-of-bounds errors.
Swift also makes sure that multiple accesses to the same area of memory don’t conflict, by requiring code that modifies a location in memory to have exclusive access to that memory. Because Swift manages memory automatically, most of the time we don’t have to think about accessing memory at all. However, it’s important to understand where potential conflicts can occur, so we can avoid writing code that has conflicting access to memory. If our code does contain conflicts, we’ll get a compile-time or runtime error.
Understanding Conflicting Access to Memory
Memory access happens in our code when we do things like set the value of a variable or pass an argument to a function. For example, the following code contains both a read access and a write access:
// A write access to the memory where one is stored.
var one = 1
// A read access from the memory where one is stored.
print("We're number \(one)!")
A conflicting access to memory can occur when different parts of our code are trying to access the same location in memory at the same time. Multiple accesses to a location in memory at the same time can produce unpredictable or inconsistent behavior. In Swift, there are ways to modify a value that span several lines of code, making it possible to attempt to access a value in the middle of its own modification.
We can see a similar problem by thinking about how we update a budget that's written on a piece of paper. Updating the budget is a two-step process: first, we add the items' names and prices, and then we change the total amount to reflect the items currently on the list. Before and after the update, we can read any information from the budget and get a correct answer.
While we’re adding items to the budget, it’s in a temporary, invalid state because the total amount hasn’t been updated to reflect the newly added items. Reading the total amount during the process of adding an item gives us incorrect information.
This example also demonstrates a challenge we may encounter when fixing conflicting access to memory: There are sometimes multiple ways to fix the conflict that produce different answers, and it’s not always obvious which answer is correct. In this example, depending on whether we wanted the original total amount or the updated total amount, either $5 or $320 could be the correct answer. Before we can fix the conflicting access, we have to determine what it was intended to do.
"If we've written concurrent or multithreaded code, conflicting access to memory might be a familiar problem. However, the conflicting access discussed here can happen on a single thread and doesn't involve concurrent or multithreaded code."
"If we have conflicting access to memory from within a single thread, Swift guarantees that we'll get an error at either compile time or runtime. For multithreaded code, use Thread Sanitizer to help detect conflicting access across threads."
Characteristics of Memory Access
There are three characteristics of memory access to consider in the context of conflicting access: whether the access is a read or a write, the duration of the access, and the location in memory being accessed. Specifically, a conflict occurs if we have two accesses that meet all of the following conditions:
At least one is a write access or a nonatomic access.
They access the same location in memory.
Their durations overlap.
The difference between a read and write access is usually obvious: a write access changes the location in memory, but a read access doesn’t. The location in memory refers to what is being accessed—for example, a variable, constant, or property. The duration of a memory access is either instantaneous or long-term.
An operation is atomic if it uses only C atomic operations; otherwise it’s nonatomic. An access is instantaneous if it’s not possible for other code to run after that access starts but before it ends. By their nature, two instantaneous accesses can’t happen at the same time. Most memory access is instantaneous. For example, all the read and write accesses in the code listing below are instantaneous:
func oneMore(than number: Int) -> Int {
return number + 1
}
var myNumber = 1
myNumber = oneMore(than: myNumber)
print(myNumber)
// Prints "2"
However, there are several ways to access memory, called long-term accesses, that span the execution of other code. The difference between instantaneous access and long-term access is that it’s possible for other code to run after a long-term access starts but before it ends, which is called overlap. A long-term access can overlap with other long-term accesses and instantaneous accesses.
Overlapping accesses appear primarily in code that uses in-out parameters in functions and methods or mutating methods of a structure.
Conflicting Access to In-Out Parameters
In this section, we will discuss the specific kinds of Swift code that use long-term accesses. A function has long-term write access to all of its in-out parameters. The write access for an in-out parameter starts after all of the non-in-out parameters have been evaluated and lasts for the entire duration of that function call. If there are multiple in-out parameters, the write accesses start in the same order as the parameters appear.
One consequence of this long-term write access is that we can’t access the original variable that was passed as in-out, even if scoping rules and access control would otherwise permit it—any access to the original creates a conflict. For example:
var stepSize = 1
func increment(_ number: inout Int) {
number += stepSize
}
increment(&stepSize)
// Error: conflicting accesses to stepSize
In the code above, stepSize is a global variable, and it is normally accessible from within increment(_:). However, the read access to stepSize overlaps with the write access to number.
Both number and stepSize refer to the same location in memory. The read and write accesses refer to the same memory and they overlap, producing a conflict.
One way to solve this conflict is to make an explicit copy of stepSize, pass the copy to increment(_:), and then update stepSize from the copy after the call returns.
When we make a copy of stepSize before calling increment(_:), it's clear that the value of the copy is incremented by the current step size. The read access ends before the write access starts, so there isn't a conflict.
Another consequence of long-term write access to in-out parameters is that passing a single variable as the argument for multiple in-out parameters of the same function produces a conflict. For example:
func balance(_ x: inout Int, _ y: inout Int) {
let sum = x + y
x = sum / 2
y = sum - x
}
var playerOneScore = 42
var playerTwoScore = 30
balance(&playerOneScore, &playerTwoScore) // OK
balance(&playerOneScore, &playerOneScore)
// Error: conflicting accesses to playerOneScore
In the above code, the balance(_:_:) function modifies its two parameters to divide the total value evenly between them. Calling it with playerOneScore and playerTwoScore as arguments doesn’t produce a conflict—there are two write accesses that overlap in time, but they access different locations in memory. In contrast, passing playerOneScore as the value for both parameters produces a conflict because it tries to perform two write accesses to the same location in memory at the same time.
Because operators are functions, they can also have long-term accesses to their in-out parameters. For example, if balance(_:_:) was an operator function named <^>, writing playerOneScore <^> playerOneScore would result in the same conflict as balance(&playerOneScore, &playerOneScore).
Conflicting Access to self in Methods
A mutating method on a structure has write access to self for the duration of the method call. For example, consider a game where each player has a health amount, which decreases when taking damage, and an energy amount, which decreases when using special abilities.
struct Player {
var name: String
var health: Int
var energy: Int
static let maxHealth = 10
mutating func restoreHealth() {
health = Player.maxHealth
}
}
In the restoreHealth() method above, a write access to self starts at the beginning of the method and lasts until the method returns. In this case, there’s no other code inside restoreHealth() that could have an overlapping access to the properties of a Player instance. The shareHealth(with:) method below takes another Player instance as an in-out parameter, creating the possibility of overlapping accesses.
extension Player {
mutating func shareHealth(with teammate: inout Player) {
balance(&teammate.health, &health)
}
}
var oscar = Player(name: "Oscar", health: 10, energy: 10)
var maria = Player(name: "Maria", health: 5, energy: 10)
oscar.shareHealth(with: &maria) // OK
In the example above, calling the shareHealth(with:) method for Oscar's player to share health with Maria's player doesn't cause a conflict. There's a write access to oscar during the method call because oscar is the value of self in a mutating method, and there's a write access to maria for the same duration because maria was passed as an in-out parameter. They access different locations in memory, so even though the two write accesses overlap in time, they don't conflict.
However, if we pass oscar as the argument to shareHealth(with:), there’s a conflict:
oscar.shareHealth(with: &oscar)
// Error: conflicting accesses to oscar
The mutating method needs write access to self for the duration of the method, and the in-out parameter needs write access to teammate for the same duration. Within the method, both self and teammate refer to the same location in memory. The two write accesses refer to the same memory and they overlap, producing a conflict.
Conflicting Access to Properties
Types like structures, tuples, and enumerations are made up of individual constituent values, such as the properties of a structure or the elements of a tuple. Because these are value types, mutating any piece of the value mutates the whole value, meaning read or write access to one of the properties requires read or write access to the whole value. For example, overlapping write accesses to the elements of a tuple produces a conflict:
var playerInformation = (health: 10, energy: 20)
balance(&playerInformation.health, &playerInformation.energy)
// Error: conflicting access to properties of playerInformation
In the example above, calling balance(_:_:) on the elements of a tuple produces a conflict because there are overlapping write accesses to playerInformation. Both playerInformation.health and playerInformation.energy are passed as in-out parameters, which means balance(_:_:) needs write access to them for the duration of the function call. In both cases, a write access to the tuple element requires a write access to the entire tuple. This means there are two write accesses to playerInformation with durations that overlap, causing a conflict.
The same error appears for overlapping write accesses to the properties of a structure that's stored in a global variable.
In practice, most access to the properties of a structure can overlap safely. For example, if the Player instance is stored in a local variable instead of a global variable, the compiler can prove that overlapping access to stored properties of the structure is safe:
func someFunction() {
var oscar = Player(name: "Oscar", health: 10, energy: 10)
balance(&oscar.health, &oscar.energy) // OK
}
In the example above, Oscar’s health and energy are passed as the two in-out parameters to balance(_:_:). The compiler can prove that memory safety is preserved because the two stored properties don’t interact in any way.
The restriction against overlapping access to properties of a structure isn’t always necessary to preserve memory safety. Memory safety is the desired guarantee, but exclusive access is a stricter requirement than memory safety—which means some code preserves memory safety, even though it violates exclusive access to memory. Swift allows this memory-safe code if the compiler can prove that the nonexclusive access to memory is still safe. Specifically, it can prove that overlapping access to properties of a structure is safe if the following conditions apply:
We’re accessing only stored properties of an instance, not computed properties or class properties.
The structure is the value of a local variable, not a global variable.
The structure is either not captured by any closures, or it’s captured only by non-escaping closures.
If the compiler can’t prove the access is safe, it doesn’t allow the access.
That’s all about in this article.
Conclusion
In this article, we understood an overview of Memory Safety in Swift. This article reviewed conflicting access to memory (for in-out parameters, self in methods, and properties) in Swift.
Thanks for reading! I hope you enjoyed and learned about Memory Safety Concepts in Swift. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
You can find other articles of CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you to this article (An Overview Of Jetpack DataStore).
In this article, we will learn about Google's new library Jetpack DataStore in Android. Jetpack DataStore is Google's new library to persist data as key-value pairs or typed objects using protocol buffers. Using Kotlin coroutines and Flow as its foundation, it aims to replace SharedPreferences. This is part of the Jetpack suite of libraries. This article explains the Jetpack DataStore implementation types and how DataStore addresses the limitations of the SharedPreferences API in Android.
To understand Jetpack DataStore, we cover the topics below:
Overview
Limitation of SharedPreferences
Types of Jetpack DataStore Implementations
Jetpack DataStore Setup
Store key-value pairs with Preferences DataStore
Store typed objects with Proto DataStore
Use Jetpack DataStore in synchronous code
A famous quote about learning is :
“One learns from books and example only that certain things can be done. Actual learning requires that you do those things.”
So Let’s begin.
Overview
Jetpack DataStore is a data storage solution that allows us to store key-value pairs or typed objects with protocol buffers. DataStore uses Kotlin coroutines and Flow to store data asynchronously, consistently, and transactionally. Google introduced DataStore to address the limitations in the SharedPreferences API.
Limitation of SharedPreferences
To understand the DataStore’s advantages, we need to know about the limitations of SharedPreferences API. Even though SharedPreferences has been around since API level 1, it has drawbacks that have persisted over time:
SharedPreferences is not always safe to call on the UI thread. It can cause jank by blocking the UI thread.
There is no way for SharedPreferences to signal errors except for parsing errors as runtime exceptions.
SharedPreferences has no support for data migration. If we want to change the type of a value, we have to write the entire logic manually.
SharedPreferences doesn't provide type safety. The application will compile fine if we try to store both Booleans and Integers using the same key.
Google introduced DataStore to address the above limitations.
Types of Jetpack DataStore Implementations
DataStore provides two different implementations: Preferences DataStore and Proto DataStore.
Preferences DataStore: Stores and accesses data using keys. This implementation does not require a predefined schema, and it does not provide type safety. This is similar to SharedPreferences. We use this to store and retrieve primitive data types.
Proto DataStore: Uses protocol buffers to store custom data types. When using Proto DataStore, we need to define a schema for the custom data type.
SharedPreferences uses XML to store data. As the amount of data increases, the file size increases dramatically and it’s more expensive for the CPU to read the file.
Protocol buffers are a newer way to represent structured data that is faster than XML and smaller in size. They're helpful when the read time of stored data affects the performance of our application.
Jetpack DataStore Setup
To use Jetpack DataStore in an application, we add the following dependencies to our Gradle file, depending on which implementation we want to use:
Datastore Typed
// Typed DataStore (Typed API surface, such as Proto)
dependencies {
implementation("androidx.datastore:datastore:1.0.0")
// optional - RxJava2 support
implementation("androidx.datastore:datastore-rxjava2:1.0.0")
// optional - RxJava3 support
implementation("androidx.datastore:datastore-rxjava3:1.0.0")
}
// Alternatively - use the following artifact without an Android dependency.
dependencies {
implementation("androidx.datastore:datastore-core:1.0.0")
}
Datastore Preferences
// Preferences DataStore (SharedPreferences like APIs)
dependencies {
implementation("androidx.datastore:datastore-preferences:1.0.0")
// optional - RxJava2 support
implementation("androidx.datastore:datastore-preferences-rxjava2:1.0.0")
// optional - RxJava3 support
implementation("androidx.datastore:datastore-preferences-rxjava3:1.0.0")
}
// Alternatively - use the following artifact without an Android dependency.
dependencies {
implementation("androidx.datastore:datastore-preferences-core:1.0.0")
}
If we use the datastore-preferences-core artifact with Proguard, we must manually add Proguard rules to our proguard-rules.pro file to keep our fields from being deleted.
Store key-value pairs with Preferences DataStore
The Preferences DataStore implementation uses the DataStore and Preferences classes to persist simple key-value pairs to disk.
Create a Preferences DataStore
// At the top level of our kotlin file:
val Context.dataStore: DataStore<Preferences> by preferencesDataStore(name = "settings")
Use the property delegate created by preferencesDataStore to create an instance of DataStore<Preferences>. Call it once at the top level of our Kotlin file, and access it through this property throughout the rest of the application. This makes it easier to keep DataStore as a singleton.
Read from a Preferences DataStore
val EXAMPLE_COUNTER = intPreferencesKey("example_counter")
val exampleCounterFlow: Flow<Int> = context.dataStore.data
.map { preferences ->
// No type safety.
preferences[EXAMPLE_COUNTER] ?: 0
}
Because Preferences DataStore does not use a predefined schema, we must use the corresponding key type function to define a key for each value that we need to store in the DataStore<Preferences> instance.
Preferences DataStore provides an edit() function that transactionally updates the data in a DataStore. The function’s transform parameter accepts a block of code where we can update the values as needed. All of the code in the transform block is treated as a single transaction.
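As a sketch of what such a write could look like (reusing the dataStore property and the EXAMPLE_COUNTER key defined above; the function name is illustrative):
suspend fun incrementCounter(context: Context) {
    context.dataStore.edit { settings ->
        // Read the current value (defaulting to 0) and write the incremented
        // value back, all within a single transaction.
        val currentValue = settings[EXAMPLE_COUNTER] ?: 0
        settings[EXAMPLE_COUNTER] = currentValue + 1
    }
}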
Store typed objects with Proto DataStore
The Proto DataStore implementation uses DataStore and protocol buffers to persist typed objects to disk.
Define a schema
Proto DataStore requires a predefined schema in a proto file in the app/src/main/proto/ directory. This schema defines the type for the objects that we persist in our Proto DataStore.
The class for our stored objects is generated at compile time from the message defined in the proto file. Make sure we rebuild our project.
Create a Proto DataStore
There are two steps involved in creating a Proto DataStore to store typed objects:
Step 1 : Define a class that implements Serializer<T>, where T is the type defined in the proto file. This serializer class tells DataStore how to read and write data type. Make sure we include a default value for the serializer to be used if there is no file created yet.
Step 2 : Use the property delegate created by dataStore to create an instance of DataStore<T>, where T is the type defined in the proto file. Call this once at the top level of kotlin file and access it through this property delegate throughout the rest of application. The filename parameter tells DataStore which file to use to store the data, and the serializer parameter tells DataStore the name of the serializer class defined in step 1.
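A sketch of both steps could look like the following, assuming the proto schema generates a class named Settings (the file name "settings.pb" is also illustrative):
object SettingsSerializer : Serializer<Settings> {
    // Used when no file has been created yet.
    override val defaultValue: Settings = Settings.getDefaultInstance()

    override suspend fun readFrom(input: InputStream): Settings =
        try {
            Settings.parseFrom(input)
        } catch (exception: InvalidProtocolBufferException) {
            throw CorruptionException("Cannot read proto.", exception)
        }

    override suspend fun writeTo(t: Settings, output: OutputStream) {
        t.writeTo(output)
    }
}

// At the top level of our Kotlin file:
val Context.settingsDataStore: DataStore<Settings> by dataStore(
    fileName = "settings.pb",
    serializer = SettingsSerializer
)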
Read from a Proto DataStore
We use DataStore.data to expose a Flow of the appropriate property from our stored object.
val exampleCounterFlow: Flow<Int> = context.settingsDataStore.data
.map { settings ->
// The exampleCounter property is generated from the proto schema.
settings.exampleCounter
}
Write to a Proto DataStore
Proto DataStore provides an updateData() function that transactionally updates a stored object. updateData() gives us the current state of the data as an instance of our data type and updates the data transactionally in an atomic read-write-modify operation.
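A sketch of such a write, reusing the settingsDataStore property and the exampleCounter field assumed above (the generated builder methods come from the proto schema):
suspend fun incrementCounter(context: Context) {
    context.settingsDataStore.updateData { currentSettings ->
        // Build a new Settings instance from the current one, atomically.
        currentSettings.toBuilder()
            .setExampleCounter(currentSettings.exampleCounter + 1)
            .build()
    }
}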
Use Jetpack DataStore in synchronous code
An asynchronous API is one of the primary benefits of DataStore. It may not always be feasible to change our surrounding code to be asynchronous. This might be the case if we're working with an existing codebase that uses synchronous disk I/O or if we have a dependency that doesn't provide an asynchronous API.
Kotlin coroutines provide the runBlocking() coroutine builder to help bridge the gap between synchronous and asynchronous code. We can use runBlocking() to read data from DataStore synchronously. RxJava offers blocking methods on Flowable. The following code blocks the calling thread until DataStore returns data:
val exampleData = runBlocking { context.dataStore.data.first() }
Performing synchronous I/O operations on the UI thread can cause ANRs or UI jank. We can mitigate these issues by asynchronously preloading the data from DataStore:
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    lifecycleScope.launch {
        // Trigger the first read so the data is cached before any synchronous access.
        context.dataStore.data.first()
        // You should also handle IOExceptions here.
    }
}
This way, DataStore asynchronously reads the data and caches it in memory. Later synchronous reads using runBlocking() may be faster or may avoid a disk I/O operation altogether if the initial read has completed.
That’s all about in this article.
Conclusion
In this article, we understood Google's new library Jetpack DataStore in Android. This article explained the Jetpack DataStore implementation types and how DataStore addresses the limitations of the SharedPreferences API in Android.
Thanks for reading! I hope you enjoyed and learned about Jetpack DataStore concepts in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
You can find other articles of CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you to this article (An Overview Of Application Security Best Practices).
In this article, we will learn about the best practices of Application Security in Android. By making our application more secure, we help preserve user trust and device integrity. This article explains a few best practices that have a significant, positive impact on our application's security.
To understand the Application Security Best Practices, we cover the topics below:
Enforce secure communication with other applications
Provide the right permissions
Store data safely
Keep the services and related dependencies up-to-date
A famous quote about learning is :
“Being a student is easy. Learning requires actual work.”
So Let’s begin.
1. Enforce secure communication with other applications
When we safeguard the data that we want to exchange between our application and other applications, or between our application and a website, we improve our application’s stability and protect the data that we want to send and receive.
1.1 Use implicit intents and non-exported content providers
1.1.1 Show an application chooser
Use implicit intents to show an application chooser that lets the user pick from at least two possible applications on the device for the requested action. This allows users to transfer sensitive information to an application that they trust.
val intent = Intent(Intent.ACTION_SEND)
val possibleActivitiesList: List<ResolveInfo> =
packageManager.queryIntentActivities(intent, PackageManager.MATCH_ALL)
// Verify that an activity in at least two applications on the user's device
// can handle the intent. Otherwise, start the intent only if an application
// on the user's device can handle the intent.
if (possibleActivitiesList.size > 1) {
// Create intent to show chooser.
// Title is something similar to "Share this photo with".
val chooser = resources.getString(R.string.chooser_title).let { title ->
Intent.createChooser(intent, title)
}
startActivity(chooser)
} else if (intent.resolveActivity(packageManager) != null) {
startActivity(intent)
}
1.1.2 Apply signature-based permissions
Apply signature-based permissions when sharing data between two applications that are controlled by us. These permissions do not require user confirmation; instead, the system checks that the applications accessing the data are signed with the same signing key. This offers a more streamlined and secure user experience.
1.1.3 Disallow access to our application’s content providers
Unless we intend to send data from our application to a different application that we don’t own, we should explicitly disallow other developers’ apps from accessing the ContentProvider objects that our application contains. This setting is particularly important if our application can be installed on devices running Android 4.1.1 (API level 16) or lower, as the android:exported attribute of the <provider> element is true by default on those versions of Android.
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.myapp">
<application ... >
<provider
android:name="android.support.v4.content.FileProvider"
android:authorities="com.example.myapp.fileprovider"
...
android:exported="false">
<!-- Place child elements of <provider> here. -->
</provider>
...
</application>
</manifest>
1.2 Ask for credentials before showing sensitive information
When we are requesting the credentials so that we can access sensitive information or premium content in our application, ask for either a PIN/password/pattern or a biometric credential, such as using face recognition or fingerprint recognition.
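As a hedged sketch of how this could look with the androidx.biometric library (the function name and prompt text are illustrative), the prompt below accepts either a strong biometric or the device PIN/pattern/password:
fun authenticateBeforeShowing(activity: FragmentActivity, onSuccess: () -> Unit) {
    val callback = object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            // Only reveal the sensitive content after successful authentication.
            onSuccess()
        }
    }
    val prompt = BiometricPrompt(activity, ContextCompat.getMainExecutor(activity), callback)
    val promptInfo = BiometricPrompt.PromptInfo.Builder()
        .setTitle("Confirm it's you")
        .setAllowedAuthenticators(
            BiometricManager.Authenticators.BIOMETRIC_STRONG or
                BiometricManager.Authenticators.DEVICE_CREDENTIAL
        )
        .build()
    prompt.authenticate(promptInfo)
}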
1.3 Apply network security measures
Ensure network security with HTTPS and SSL: for any kind of network communication, we must use HTTPS (instead of plain HTTP) with a proper certificate implementation. This section describes how we can improve our application's network security.
1.3.1 Use SSL traffic
If our application communicates with a web server that has a certificate issued by a well-known, trusted CA, the HTTPS request is very simple:
val url = URL("https://www.google.com")
val urlConnection = url.openConnection() as HttpsURLConnection
urlConnection.connect()
urlConnection.inputStream.use {
...
}
1.3.2 Add a network security configuration
If our application uses new or custom CAs, we can declare our network’s security settings in a configuration file. This process allows us to create the configuration without modifying any application code.
To add a network security configuration file to our application, we can follow these steps:
1. Declare the configuration in our application's manifest:
<manifest ... >
<application
android:networkSecurityConfig="@xml/network_security_config"
... >
<!-- Place child elements of <application> element here. -->
</application>
</manifest>
2. Add an XML resource file, located at res/xml/network_security_config.xml.
3. Specify that all traffic to particular domains should use HTTPS by disabling clear-text traffic in that configuration file.
During the development process, we can use the <debug-overrides> element to explicitly allow user-installed certificates. This element overrides our application’s security-critical options during debugging and testing without affecting the application’s release configuration.
This element is defined in our application's network security configuration XML file, alongside the domain-specific settings.
Our SSL checker shouldn’t accept every certificate. We may need to set up a trust manager and handle all SSL warnings that occur if one of the following conditions applies to our use case:
We’re communicating with a web server that has a certificate signed by a new or custom CA.
That CA isn’t trusted by the device we’re using.
We cannot use a network security configuration.
1.4 Use WebView objects carefully
Whenever possible, we load only allowlisted content in WebView objects. In other words, the WebView objects in our application shouldn’t allow users to navigate to sites that are outside of our control. In addition, we should never enable JavaScript interface support unless we completely control and trust the content in our application’s WebView objects.
1.4.1 Use HTML message channels
If our application must use JavaScript interface support on devices running Android 6.0 (API level 23) and higher, use HTML message channels for communication between a website and our application, as shown in the following code snippet:
val myWebView: WebView = findViewById(R.id.webview)
// channel[0] and channel[1] represent the two ports.
// They are already entangled with each other and have been started.
val channel: Array<out WebMessagePort> = myWebView.createWebMessageChannel()
// Create handler for channel[0] to receive messages.
channel[0].setWebMessageCallback(object : WebMessagePort.WebMessageCallback() {
override fun onMessage(port: WebMessagePort, message: WebMessage) {
Log.d(TAG, "On port $port, received this message: $message")
}
})
// Send a message from channel[1] to channel[0].
channel[1].postMessage(WebMessage("My secure message"))
2. Provide the right permissions
Application should request only the minimum number of permissions necessary to function properly.
2.1 Use intents to defer permissions
We should not add a permission to complete an action that could be completed in another application. Instead, we use an intent to defer the request to a different application that already has the necessary permission.
For example, if an application needs to create a contact, it can delegate the responsibility of creating the contact to a contacts application, which has already been granted the appropriate WRITE_CONTACTS permission.
// Delegates the responsibility of creating the contact to a contacts application,
// which has already been granted the appropriate WRITE_CONTACTS permission.
Intent(Intent.ACTION_INSERT).apply {
type = ContactsContract.Contacts.CONTENT_TYPE
}.also { intent ->
// Make sure that the user has a contacts application installed on their device.
intent.resolveActivity(packageManager)?.run {
startActivity(intent)
}
}
In addition, if our application needs to perform file-based I/O – such as accessing storage or choosing a file – it doesn’t need special permissions because the system can complete the operations on our application’s behalf. Better still, after a user selects content at a particular URI, the calling application gets granted permission to the selected resource.
2.2 Share data securely across applications
We can follow these best practices in order to share our application’s content with other applications in a more secure manner:
Enforce read-only or write-only permissions as needed.
Provide clients one-time access to data by using the FLAG_GRANT_READ_URI_PERMISSION and FLAG_GRANT_WRITE_URI_PERMISSION flags.
When sharing data, we use “content://” URIs, not “file://” URIs. Instances of FileProvider do this for us.
The following code snippet shows how to use URI permission grant flags and content provider permissions to display an application’s PDF file in a separate PDF Viewer application:
// Create an Intent to launch a PDF viewer for a file owned by this application.
Intent(Intent.ACTION_VIEW).apply {
data = Uri.parse("content://com.example/personal-info.pdf")
// This flag gives the started application read access to the file.
addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
}.also { intent ->
// Make sure that the user has a PDF viewer application installed on their device.
intent.resolveActivity(packageManager)?.run {
startActivity(intent)
}
}
3. Store data safely
Although our application might require access to sensitive user information, our users will grant our application access to their data only if they trust that we’ll safeguard it properly.
3.1 Store private data within internal storage
We need to store all private user data within the device’s internal storage, which is sandboxed per application. Our application doesn’t need to request permission to view these files, and other applications cannot access the files. As an added security measure, when the user uninstalls an app, the device deletes all files that the app saved within internal storage.
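As a small sketch (the file name and content are illustrative, assuming a Context is available), writing private data to internal storage can be as simple as:
// Files written with openFileOutput() live in the app's sandboxed internal storage.
context.openFileOutput("user_profile.json", Context.MODE_PRIVATE).use { output ->
    output.write("""{"theme":"dark"}""".toByteArray())
}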
We should consider working with EncryptedFile objects, available from the Security library, instead of File objects if the data we are storing is particularly sensitive or private.
For example, one way to write data to storage is demonstrated in the code snippet below:
// Although you can define your own key generation parameter specification, it's
// recommended that you use the value specified here.
val keyGenParameterSpec = MasterKeys.AES256_GCM_SPEC
val mainKeyAlias = MasterKeys.getOrCreate(keyGenParameterSpec)
// Create a file with this name, or replace an entire existing file
// that has the same name. Note that you cannot append to an existing file,
// and the file name cannot contain path separators.
val fileToWrite = "my_sensitive_data.txt"
val encryptedFile = EncryptedFile.Builder(
File(DIRECTORY, fileToWrite),
applicationContext,
mainKeyAlias,
EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build()
val fileContent = "MY SUPER-SECRET INFORMATION"
.toByteArray(StandardCharsets.UTF_8)
encryptedFile.openFileOutput().apply {
write(fileContent)
flush()
close()
}
Another example shows the inverse operation, reading data from storage:
// Although you can define your own key generation parameter specification, it's
// recommended that you use the value specified here.
val keyGenParameterSpec = MasterKeys.AES256_GCM_SPEC
val mainKeyAlias = MasterKeys.getOrCreate(keyGenParameterSpec)
val fileToRead = "my_sensitive_data.txt"
val encryptedFile = EncryptedFile.Builder(
File(DIRECTORY, fileToRead),
applicationContext,
mainKeyAlias,
EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build()
val inputStream = encryptedFile.openFileInput()
val byteArrayOutputStream = ByteArrayOutputStream()
var nextByte: Int = inputStream.read()
while (nextByte != -1) {
byteArrayOutputStream.write(nextByte)
nextByte = inputStream.read()
}
val plaintext: ByteArray = byteArrayOutputStream.toByteArray()
3.2 Store data in external storage based on use case
We consider external storage for large, non-sensitive files that are specific to our application, as well as files that our application shares with other applications. The specific APIs that we use depend on whether our application is designed to access app-specific files or access shared files.
3.2.1 Check availability of storage volume
When a user interacts with a removable external storage device from the application, they might remove the storage device while our app is trying to access it. We need to include logic to verify that the storage device is available.
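A minimal sketch of such a check (the helper name is illustrative) using the framework's Environment API:
// Returns true only if shared/external storage is currently mounted.
fun isExternalStorageAvailable(): Boolean {
    val state = Environment.getExternalStorageState()
    return state == Environment.MEDIA_MOUNTED || state == Environment.MEDIA_MOUNTED_READ_ONLY
}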
3.2.2 Access application-specific files
If a file doesn’t contain private or sensitive information but provides value to the user only in our application, we store the file in an application-specific directory on external storage.
3.2.3 Access shared files
If our application needs to access or store a file that provides value to other applications, we can use one of the following APIs depending on our use case:
Media files: To store and access images, audio files, and videos that are shared between apps, use the Media Store API.
Other files: To store and access other types of shared files, including downloaded files, use the Storage Access Framework.
3.2.4 Check validity of data
If our application uses data from external storage, make sure that the contents of the data haven’t been corrupted or modified. Our application should also include logic to handle files that are no longer in a stable format.
Let's take the example of a hash verifier in the code snippet below:
val hash = calculateHash(stream)
// Store "expectedHash" in a secure location.
if (hash == expectedHash) {
// Work with the content.
}
// Calculating the hash code can take quite a bit of time, so it shouldn't
// be done on the main thread.
suspend fun calculateHash(stream: InputStream): String {
return withContext(Dispatchers.IO) {
val digest = MessageDigest.getInstance("SHA-512")
val digestStream = DigestInputStream(stream, digest)
while (digestStream.read() != -1) {
// The DigestInputStream does the work; nothing for us to do.
}
digest.digest().joinToString(":") { "%02x".format(it) }
}
}
3.3 Store only non-sensitive data in cache files
To provide quicker access to non-sensitive application data, we can store it in the device's cache. For caches larger than 1 MB in size, we use getExternalCacheDir(); otherwise, we use getCacheDir(). Each method provides the File object that contains our application's cached data.
Let's take an example code snippet that shows how to cache a file that the application recently downloaded:
val cacheFile = File(myDownloadedFileUri).let { fileToCache ->
File(cacheDir.path, fileToCache.name)
}
If we use getExternalCacheDir() to place our application's cache within shared storage, the user might eject the media containing this storage while our application runs. We should include logic to gracefully handle the cache miss that this user behavior causes.
3.4 Use SharedPreferences in private mode
When we are using getSharedPreferences() to create or access our application’s SharedPreferences objects, use MODE_PRIVATE. That way, only our application can access the information within the shared preferences file.
Moreover, EncryptedSharedPreferences should be used for more security which wraps the SharedPreferences class and automatically encrypts keys and values.
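A sketch of creating such an encrypted preferences file with the Security library (the file name and key are illustrative, and this reuses the MasterKeys helper shown earlier):
val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

val securePrefs = EncryptedSharedPreferences.create(
    "secret_shared_prefs",
    masterKeyAlias,
    applicationContext,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

// Keys and values are encrypted transparently on write and decrypted on read.
securePrefs.edit()
    .putString("api_token", "my-secret-token")
    .apply()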
4. Keep services and dependencies up-to-date
Most applications use external libraries and device system information to complete specialized tasks. By keeping our app’s dependencies up to date, we make these points of communication more secure.
4.1 Check the Google Play services security provider
If our application uses Google Play services, make sure that it’s updated on the device where our application is installed. This check should be done asynchronously, off of the UI thread. If the device isn’t up-to-date, our application should trigger an authorization error.
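A hedged sketch of this check using Google Play services' ProviderInstaller (the callback behavior shown here is illustrative):
ProviderInstaller.installIfNeededAsync(context, object : ProviderInstaller.ProviderInstallListener {
    override fun onProviderInstalled() {
        // The security provider is up to date; secure connections can be opened safely.
    }

    override fun onProviderInstallFailed(errorCode: Int, recoveryIntent: Intent?) {
        // Prompt the user to update Google Play services, or disable features
        // that rely on an up-to-date security provider.
    }
})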
4.2 Update all application dependencies
Before deploying our application, make sure that all libraries, SDKs, and other dependencies are up to date:
For first-party dependencies, such as the Android SDK, we use the updating tools found in Android Studio, such as the SDK Manager.
For third-party dependencies, we check the websites of the libraries that our app uses, and install any available updates and security patches.
Conclusion
In this article, we understood the best practices of Application Security in Android. This article explained a few best practices that every mobile app developer should follow to secure an application against vulnerabilities. This helps us develop the highly secure applications required to protect our users' valuable information and maintain the trust of our clients.
Thanks for reading! I hope you enjoyed and learned about the best practices of Application Security concepts in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
You can find other articles of CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you to A Short Note Series (An Overview Of Functions Currying In Kotlin).
In this note series, we will understand Functions Currying in Kotlin. Currying is a common technique in functional programming. It allows transforming a given function that takes multiple arguments into a sequence of functions, each having a single argument.
In this short note series, we are going to implement an automatic currying mechanism that could be applied to any function taking three parameters.
So Let’s begin.
Overview
We can understand Currying as :
A common technique in functional programming.
Transforming a given function that takes multiple arguments into a sequence of functions, each having a single argument.
Each of the resulting functions handles one argument of the original (uncurried) function and returns another function.
To understand the concept of functions currying, let’s consider the following example of a function handling three parameters:
fun foo(a: A, b: B, c: C): D
Its curried form would look like this:
fun carriedFoo(a: A): (B) -> (C) -> D
In other words, the curried form of the foo function would take a single argument of the A type and return another function of the following type: (B) -> (C) -> D. The returned function is responsible for handling the second argument of the original function and returns another function, which takes the third argument and returns a value of type D.
How To Implement it ?
In this section, we are going to implement the curried() extension function for the generic functional type (P1, P2, P3) -> R. The curried() function is going to return a chain of single-argument functions and will be applicable to any function which takes three arguments.
Here, we can implement Functions Currying with the below steps :
Step 1 – Declare a header of the curried() function:
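A minimal sketch of the declaration (header plus a possible body) for the three-argument function type could look like this:
// Transform a (P1, P2, P3) -> R function into a chain of single-argument
// functions, each capturing one parameter of the original function.
fun <P1, P2, P3, R> ((P1, P2, P3) -> R).curried(): (P1) -> (P2) -> (P3) -> R =
    { p1: P1 ->
        { p2: P2 ->
            { p3: P3 ->
                this(p1, p2, p3)
            }
        }
    }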
Let’s explore how to use the curried() function in action. In the following example, we are going to call curried() on the following function instance which is responsible for computing a sum of three integers:
fun sum(a: Int, b: Int, c: Int): Int = a + b + c
In order to obtain a curried form of the sum() function, we have to invoke the curried() function on its reference:
::sum.curried()
Then we can invoke the curried sum function in the following way:
val result: Int = ::sum.curried()(1)(2)(3)
Here, the result variable is going to be assigned an integer value equal to 6.
In order to invoke the curried() extension function, we access the sum() function reference using the :: modifier. Then we invoke the next functions from the function sequence returned by the curried function one by one.
The preceding code could be written in an equivalent more verbose form with explicit types declarations:
val sum3: (a: Int) -> (b: Int) -> (c: Int) -> Int = ::sum.curried()
val sum2: (b: Int) -> (c: Int) -> Int = sum3(1)
val sum1: (c: Int) -> Int = sum2(2)
val result: Int = sum1(3)
Under the hood, the currying mechanism implementation is just returning functions nested inside each other. Every time the specific function is invoked, it returns another function with the arity reduced by one.
Conclusion
In this note series, we understood about Functions Currying in Kotlin. Currying is useful whenever we can’t provide the full number of required arguments to the function in the current scope. We can apply only the available ones to the function and return the transformed function.
Thanks for reading! I hope you enjoyed and learned about Functions Currying in Kotlin. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
You can find other articles of CoolMonkTechie at the links below:
Hello Readers, CoolMonkTechie heartily welcomes you to this article (How To Implement Automatic Functions Memoization Technique in Kotlin?).
In this article, we will learn how to implement the automatic functions memoization technique in Kotlin. Memoization is a technique used to optimize program-execution speed by caching the results of expensive function calls and reusing their ready values when they are required again. Although memoization causes an obvious trade-off between memory usage and computation time, it's often crucial to provide the desired performance. This article explains how the automatic functions memoization technique works in Kotlin.
To understand the Automatic Functions Memoization Technique in Kotlin, we cover the topics below:
Overview
How to Implement Technique?
How Technique Works?
A famous quote about learning is :
“That is what learning is. You suddenly understand something you’ve understood all your life, but in a new way.”
So Let’s begin.
Overview
We can understand Memoization as:
A technique used to optimize the program-execution speed by caching the results of expensive function calls and reusing their ready values when they are required again.
An obvious trade-off between memory usage and computation time that is often crucial to provide the desired performance. Usually, we apply this pattern to computationally expensive functions.
It can help to optimize recursive functions that call themselves multiple times with the same parameters values.
Memoization can easily be added internally to function implementation. However, in this article, we are going to create a general-purpose, reusable memoization mechanism that could be applied to any function.
How to Implement Technique?
In this section, we will understand how to implement it and what steps are required for the implementation.
Step 1 – Declare a Memoizer class responsible for caching the results:
import java.util.concurrent.ConcurrentHashMap

class Memoizer<P, R> private constructor() {
private val map = ConcurrentHashMap<P, R>()
private fun doMemoize(function: (P) -> R):
(P) -> R = { param: P ->
map.computeIfAbsent(param) { param: P ->
function(param)
}
}
companion object {
fun <T, U> memoize(function: (T) -> U):
(T) -> U =
Memoizer<T, U>().doMemoize(function)
}
}
Step 2 – Provide a memoized() extension function for the (P) -> R function type:
fun <P, R> ((P) -> R).memoized(): (P) -> R = Memoizer.memoize<P, R>(this)
How Technique Works?
The memoize() function takes an instance of a one-argument function as its argument. The Memoizer class contains a ConcurrentHashMap<P, R> instance, which is used to cache the function's return values. The map stores the arguments passed to the memoized function as its keys, and their corresponding return values as its values.
First, the memoized function looks up the value for the specific argument it was called with. If the value is present in the map, it is returned. Otherwise, the original function is executed, and its result is returned and put into the map. This is achieved using the handy computeIfAbsent(key, mappingFunction) function available on ConcurrentHashMap.
Additionally, we can provide an extension function, memoized(), for the Function1 type, which will allow us to apply the memoize() function directly to the function references.
Note that, under the hood, functions in Kotlin are compiled to FunctionN interface instances in the Java bytecode, where N corresponds to the number of function arguments. That is why we can declare an extension function for a function type. For example, in order to add an extension function for a function taking two arguments, (P, Q) -> R, we need to define an extension as fun <P, Q, R> Function2<P, Q, R>.myExtension(): MyReturnType.
Now, take a look at how we can benefit from the memoized() function in action. Consider a function that computes the factorial of an integer recursively:
fun factorial(n: Int): Long = if (n == 1) n.toLong() else n * factorial(n - 1)
We can apply the memoized() extension function to enable caching of the results:
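A hedged sketch of this in action (measureNanoTime comes from kotlin.system in the standard library; the argument value 20 is arbitrary but still fits in a Long):
fun main() {
    val memoizedFactorial: (Int) -> Long = ::factorial.memoized()

    // The first call computes the value recursively; the second call with the
    // same argument is served from the Memoizer's ConcurrentHashMap cache.
    val firstCall = measureNanoTime { memoizedFactorial(20) }
    val secondCall = measureNanoTime { memoizedFactorial(20) }

    println("First call took $firstCall ns, second call took $secondCall ns")
}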
As we can see, once a result has been computed and cached, invoking the memoized function again with the same argument takes much less time than the first computation, because the value is served from the cache instead of being recomputed.
We can implement similar automatic memoization implementations for other functions that take more than one argument. In order to declare an extension function for a function taking N arguments, we’d have to implement an extension function for the FunctionN type.
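As a rough sketch of such a variant, a two-argument memoized() extension could cache results keyed by a Pair of the arguments. The Pair-based key is an assumption of this illustration, not part of the original mechanism:

import java.util.concurrent.ConcurrentHashMap

fun <P, Q, R> ((P, Q) -> R).memoized(): (P, Q) -> R {
    val original = this
    // Cache keyed by the pair of arguments
    val cache = ConcurrentHashMap<Pair<P, Q>, R>()
    return { p: P, q: Q ->
        cache.computeIfAbsent(p to q) { original(p, q) }
    }
}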
In this article, we understood how to implement the Automatic Function Memoization Technique in Kotlin and walked through how the technique works.
Thanks for reading! I hope you enjoyed and learned about Automatic Functions Memoization Technique concepts in Kotlin. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other CoolMonkTechie articles at the link below:
Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Implement Either Monad Design Pattern in Kotlin ?).
In this article, we will learn about how to implement Either Monad Design Pattern in Kotlin. The concept of Monad is one of the fundamental functional programming design patterns. We can understand a Monad as an encapsulation for a data type that adds a specific functionality to it or provides custom handlers for different states of the encapsulated object.
One of the most commonly used is the Maybe monad. The Maybe monad is supposed to provide information about the enclosed property's presence. It can return an instance of the wrapped type whenever it's available, or nothing when it's not. Java 8 introduced the Optional class, which implements the Maybe concept. It's a great way to avoid operating on null values. This article explains the Either Monad Design Pattern workflow in Kotlin.
To understand the Either Monad Design Pattern in Kotlin, we will cover the following topics:
Overview
Pattern Implementation Steps
How the Pattern Works?
A famous quote about learning is :
” The beautiful thing about learning is that nobody can take it away from you.”
So Let’s begin.
Overview
We can understand the Either Monad Design Pattern as:
One of the fundamental functional programming design patterns, and
An encapsulation for a data type that adds a specific functionality to it or provides custom handlers for different states of the encapsulated object.
However, apart from having the information about the unavailable state, we would often like to be able to provide some additional information. For example, if the server returns an empty response, it would be useful to get an error code or a message instead of the null or an empty response string. This is a scenario for another type of Monad, usually called Either, which we are going to implement in this article.
Pattern Implementation Steps
In this section, we will understand how to implement it and what steps are required for the implementation.
Step 1 – Declare Either as a sealed class
sealed class Either<out L, out R>
Step 2 – Add two subclasses of Either, representing Error and Value:
sealed class Either<out L, out R> {
    data class Left<out L>(val left: L) : Either<L, Nothing>()
    data class Right<out R>(val right: R) : Either<Nothing, R>()
}
Step 3 – Add factory functions to conveniently instantiate Either:
sealed class Either<out L, out R> {
    data class Left<out L>(val left: L) : Either<L, Nothing>()
    data class Right<out R>(val right: R) : Either<Nothing, R>()

    companion object {
        fun <R> right(value: R): Either<Nothing, R> = Either.Right(value)
        fun <L> left(value: L): Either<L, Nothing> = Either.Left(value)
    }
}
How the Pattern Works?
In order to make use of the class Either and benefit from the Either.right() and Either.left() methods, we can implement a getEither() function that will try to perform some operation passed to it as a parameter. If the operation succeeds, it is going to return the Either.Right instance holding the result of the operation, otherwise, it is going to return Either.Left, holding a thrown exception instance:
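The original listing for getEither() is not included here; the following is a minimal sketch, assuming the operation is passed as a lambda and any thrown exception becomes the Left value:

fun <V> getEither(operation: () -> V): Either<Exception, V> =
    try {
        // Success: wrap the operation's result in Either.Right
        Either.right(operation())
    } catch (e: Exception) {
        // Failure: wrap the thrown exception in Either.Left
        Either.left(e)
    }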
By convention, we can use the Either.Right type to provide a default value and Either.Left to handle any possible edge cases.
One essential functional programming feature the Either Monad can provide is the ability to apply functions to its values. We can simply extend the Either class with the fold() function, which can take two functions as the parameters. The first function should be applied to the Either.Left type and the second should be applied to Either.Right:
sealed class Either<out L, out R> {
    data class Left<out L>(val left: L) : Either<L, Nothing>()
    data class Right<out R>(val right: R) : Either<Nothing, R>()

    fun <T> fold(leftOp: (L) -> T, rightOp: (R) -> T): T = when (this) {
        is Left -> leftOp(this.left)
        is Right -> rightOp(this.right)
    }
    //…
}
The fold() function will return a value from either the leftOp or rightOp function, whichever is used. The usage of the fold() function can be illustrated with a server-request parsing example.
Suppose we have the following types declared:
data class Response(val json: JsonObject)
data class ErrorResponse(val code: Int, val message: String)
And we also have a function responsible for delivering a backend response:
fun someGetRequest(): Either<ErrorResponse, Response> = //..
We can use the fold() function to handle the returned value in the right way:
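The original call-site example was not included here; a minimal sketch might look as follows, where showError() and showResponse() are hypothetical handlers defined only for this illustration:

fun showError(code: Int, message: String) = println("Error $code: $message")
fun showResponse(json: JsonObject) = println("Success: $json")

someGetRequest().fold(
    { error -> showError(error.code, error.message) },   // handles Either.Left<ErrorResponse>
    { response -> showResponse(response.json) }          // handles Either.Right<Response>
)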
We can also extend the Either class with other useful functions, like the ones available in the standard library for data-processing operations—map, filter, and exists.
In this article, we understood how to implement the Either Monad Design Pattern in Kotlin. This article explained the Either Monad Design Pattern workflow in Kotlin.
Thanks for reading! I hope you enjoyed and learned about Either Monad Design Pattern concepts in Kotlin. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other CoolMonkTechie articles at the link below:
Hello Readers, CoolMonkTechie heartily welcomes you in A Short Note Series (An Overview Of Android Foreground Service Launch Restrictions).
In this note series, we will understand the Android Foreground Service Launch Restrictions introduced in Android 12.
Apps that target Android 12 can no longer start foreground services while running in the background, except for a few special cases. If an app tries to start a foreground service while the app is running in the background, and the foreground service doesn’t satisfy one of the exceptional cases, the system throws a ForegroundServiceStartNotAllowedException.
So Let’s begin.
Recommended alternative to foreground services
If our app is affected by this change, we can migrate to using WorkManager. WorkManager is the recommended solution for starting higher-priority background tasks.
Starting in WorkManager 2.7.0, our app can call setExpedited() to declare that a Worker should use an expedited job. This new API uses expedited jobs when running on Android 12, and the API uses foreground services on prior versions of Android to provide backward compatibility.
The following code snippet shows an example of how to use the setExpedited() method:
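The original snippet was not included here; the following is a minimal sketch, assuming a hypothetical UploadWorker class and an available context reference:

import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.OutOfQuotaPolicy
import androidx.work.WorkManager

// UploadWorker is a hypothetical Worker subclass used only for illustration
val expeditedRequest = OneTimeWorkRequestBuilder<UploadWorker>()
    // Fall back to ordinary (non-expedited) work if the expedited quota is exhausted
    .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST)
    .build()

WorkManager.getInstance(context).enqueue(expeditedRequest)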
Because the CoroutineWorker.setForeground() and ListenableWorker.setForegroundAsync() methods are backed by foreground services, they’re subject to the same foreground service launch restrictions and exemptions. We can use the API opportunistically, but be prepared to handle an exception if the system disallows our app from starting a foreground service. For a more consistent experience, use setExpedited().
Cases where foreground service launches from the background are allowed
In the following situations, our app can start foreground services even while our app is running in the background:
Our app transitions from a user-visible state, such as an activity.
Our app can start an activity from the background, except for the case where the app has an activity in the back stack of an existing task.
Our app receives a high-priority message using Firebase Cloud Messaging.
The user performs an action on a UI element related to our app. For example, they might interact with a bubble, notification, widget, or activity.
Our app receives an event that’s related to geofencing or activity recognition transition.
After the device reboots and receives the ACTION_BOOT_COMPLETED, ACTION_LOCKED_BOOT_COMPLETED, or ACTION_MY_PACKAGE_REPLACED intent action in a broadcast receiver.
Our app receives the ACTION_TIMEZONE_CHANGED, ACTION_TIME_CHANGED, or ACTION_LOCALE_CHANGED intent action in a broadcast receiver.
Our app receives a Bluetooth broadcast that requires the BLUETOOTH_CONNECT or BLUETOOTH_SCAN permissions.
Apps with certain system roles or permissions, such as device owners and profile owners.
Our app uses the Companion Device Manager. To let the system wake our app whenever a companion device is nearby, implement the Companion Device Service in Android 12.
The system restarts a “sticky” foreground service. To make a foreground service sticky, return either START_STICKY or START_REDELIVER_INTENT from onStartCommand().
The user turns off battery optimizations for our app. We can help users find this option by sending them to our app’s App info page in system settings. To do so, invoke an intent that contains the ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS intent action, as shown in the sketch below.
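A minimal sketch of that last case, written as if it runs inside an Activity (the surrounding Activity is assumed for illustration):

import android.content.Intent
import android.provider.Settings

// Opens the system screen where the user can exempt our app from battery optimizations
val intent = Intent(Settings.ACTION_IGNORE_BATTERY_OPTIMIZATION_SETTINGS)
startActivity(intent)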
Conclusion
In this note series, we understood the Android foreground service launch restrictions in Android 12 and discussed the cases where foreground service launches from the background are allowed.
Thanks for reading! I hope you enjoyed and learned about Foreground service launch restrictions in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other CoolMonkTechie articles at the link below:
Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Schedule Tasks With WorkManager ?).
In this article, we will learn about how to schedule tasks with WorkManager. WorkManager is an API that makes it easy to schedule deferrable, asynchronous tasks that are expected to run even if the app exits or the device restarts. The WorkManager API is a suitable and recommended replacement for all previous Android background scheduling APIs, including FirebaseJobDispatcher, GcmNetworkManager, and Job Scheduler. WorkManager incorporates the features of its predecessors in a modern, consistent API that works back to API level 14 while also being conscious of battery life. This article explains about WorkManager workflow and advantages in Android application.
To understand WorkManager in Android, we will cover the following topics:
What is WorkManager?
Why use WorkManager ?
Features of WorkManager
When to use WorkManager?
How WorkManager works?
WorkManager implementation demo application to Schedule Tasks
Demo application output
A famous quote about learning is :
” There is no end to education. It is not that you read a book, pass an examination, and finish with education. The whole of life, from the moment you are born to the moment you die, is a process of learning. “
So Let’s begin.
What is WorkManager ?
“WorkManager is a background processing library which is used to execute background tasks which should run in a guaranteed way but not necessarily immediately.”
“WorkManager is a task scheduler that makes it easy to specify the asynchronous task easily and when they should run. The WorkManager API helps create the task and hand it to the Work Manager to run immediately or at an appropriate time as mentioned. “
With WorkManager, we can enqueue our background processing even when the app is not running and the device is rebooted for some reason. WorkManager also lets us define constraints necessary to run the task e.g. network availability before starting the background task.
For example, our app might need to download new resources from the network from time to time. Downloading is a task, and we can set up this task to run at an appropriate time, depending on the availability of a Wi-Fi network or whether the device is charging. This way we can schedule a task using WorkManager.
WorkManager is a part of Android Jetpack (a suite of libraries to guide developers to write quality apps) and is one of the Android Architecture Components (a collection of components that help developers design robust, testable, and easily maintainable apps).
If our app targets Android 10 (API level 29) or above, our FirebaseJobDispatcher and GcmNetworkManager API calls will no longer work on devices running Android Marshmallow (6.0) and above.
Why use WorkManager ?
Since Marshmallow, the Android team has been continuously working on battery optimizations. First it introduced Doze mode, and then Oreo imposed various kinds of limitations on performing background jobs. Before WorkManager, we used various job schedulers for background tasks, such as Firebase JobDispatcher, JobScheduler, and AlarmManager with broadcast receivers. From a developer’s perspective, it was difficult to choose which scheduler to use and which one was best. WorkManager handles this for us: we pass the task to WorkManager, and it uses Firebase JobDispatcher, AlarmManager with broadcast receivers, or JobScheduler under the hood to perform the background task, depending on the requirement.
Features of WorkManager
In addition to providing a simpler and consistent API, WorkManager has a number of other key benefits, including:
Work Constraints
Declaratively define the optimal conditions for our work to run using Work Constraints. (For example, run only when the device is on Wi-Fi, when the device is idle, or when it has sufficient storage space.)
Robust Scheduling
WorkManager allows us to schedule work to run one-time or repeatedly using flexible scheduling windows. Work can be tagged and named as well, allowing us to schedule unique, replaceable work and to monitor or cancel groups of work together. Scheduled work is stored in an internally managed SQLite database, and WorkManager takes care of ensuring that this work persists and is rescheduled across device reboots. In addition, WorkManager adheres to power-saving features and best practices like Doze mode, so we don’t have to worry about it.
Flexible Retry Policy
WorkManager offers flexible retry policies, including a configurable exponential backoff policy, if work fails.
Work Chaining
For complex related work, chain individual work tasks together using a fluent, natural interface that allows us to control which pieces run sequentially and which run in parallel. For each work task, we can define input and output data for that work. When chaining work together, WorkManager automatically passes output data from one work task to the next, as sketched below.
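A minimal sketch of work chaining, assuming two hypothetical Worker classes, CompressWorker and UploadWorker, and an available context reference:

import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager

// Compression must finish before the upload starts; WorkManager forwards
// CompressWorker's output data as UploadWorker's input data
WorkManager.getInstance(context)
    .beginWith(OneTimeWorkRequestBuilder<CompressWorker>().build())
    .then(OneTimeWorkRequestBuilder<UploadWorker>().build())
    .enqueue()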
Built-In Threading Interoperability
WorkManager integrates seamlessly with RxJava and Coroutines and provides the flexibility to plug in our own asynchronous APIs.
When to use WorkManager?
WorkManager handles background work that needs to run when various constraints are met, regardless of whether the application process is alive or not. Background work can be started when the app is in the background, when the app is in the foreground, or when the app starts in the foreground but goes to the background. Regardless of what the application is doing, background work should continue to execute, or be restarted if Android kills its process.
A common confusion about WorkManager is that it’s for tasks that need to be run in a “background” thread but don’t need to survive process death. This is not the case. There are other solutions for this use case, like Kotlin’s coroutines, ThreadPools, or libraries like RxJava. There are many different situations in which we need to run background work, and therefore different solutions for running background work.
WorkManager can be a perfect background processing library to use in android when our task:
Does not need to run at a specific time
Can be deferred to be executed
Is guaranteed to run even after the app is killed or device is restarted
Has to meet constraints like battery supply or network availability before execution
The simplest example can be when our app needs to upload a large chunk of user data to the server. This particular use case meets the criteria we mentioned above to choose WorkManager because:
Results need not be reflected immediately in our Android app
Data needs to be uploaded even if, after the upload begins, the user kills the app to work on some other app, and
The network needs to be available in order to upload data on the server.
How WorkManager Works ?
In this section, we will understand the class and concept of WorkManager. Let’s understand what are various base classes that are used for Job Scheduling.
Worker
Work is defined using the Worker class. It specifies what task to perform. The WorkManager API includes an abstract Worker class, and we need to extend this class and override its doWork() method to perform the work.
WorkRequest
WorkRequest represents an individual task that is to be performed. Through the WorkRequest, we can add details for the work, such as constraints, and we can also attach input data while creating the request.
A WorkRequest can be of two types:
OneTimeWorkRequest – used when we request non-repeating work.
PeriodicWorkRequest – used for creating a request for repeating work (a short sketch follows this list).
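A minimal sketch of a periodic request, reusing the NotificationWorker created later in this article and assuming a context reference is available; the 15-minute interval (WorkManager’s minimum period) is an illustrative choice:

import java.util.concurrent.TimeUnit
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager

// Runs NotificationWorker roughly every 15 minutes (the minimum allowed period)
val periodicRequest = PeriodicWorkRequestBuilder<NotificationWorker>(15, TimeUnit.MINUTES).build()
WorkManager.getInstance(context).enqueue(periodicRequest)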
WorkManager
The WorkManager class enqueues and manages all the work requests. We pass the work request object to WorkManager to enqueue the task.
WorkInfo
WorkInfo contains information about a particular task. WorkManager provides LiveData for each of the work request objects; we can observe it and get the current status of the task.
WorkManager Implementation Demo Application to Schedule Tasks
In this section, we will go through the below steps to implement WorkManager and schedule a task with a simple Android demo application:
Add WorkManager dependency in app/build.gradle file
Create Layout
Add a base class of Worker
Create WorkRequest
Enqueue the request with WorkManager
Fetch the particular task status
1. Add WorkManager dependency in app/build.gradle file
To use WorkManager, we have to add its dependency in the app/build.gradle file. So let’s open the app build.gradle file and add the lines below.
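The original dependency lines were not included here; a minimal sketch using the Gradle Kotlin DSL might look like this (the version number is illustrative, so check for the latest release):

// app/build.gradle.kts
dependencies {
    implementation("androidx.work:work-runtime-ktx:2.7.1")
}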
2. Create Layout
In this step, we will create a layout. This layout will contain a TextView and a Button. After that, we will set an onClickListener(); this click event will enqueue the WorkRequest to WorkManager and show the status in the TextView.
3. Add a base class of Worker
In this step, we create a subclass of the Worker class, override its unimplemented doWork() method, and call the super constructor.
NotificationWorker.kt
package com.coolmonktechie.android.workmanagerdemo

import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat
import androidx.work.Data
import androidx.work.Worker
import androidx.work.WorkerParameters

class NotificationWorker(context: Context, workerParams: WorkerParameters) :
    Worker(context, workerParams) {

    override fun doWork(): Result {
        // Read the input data attached to the WorkRequest
        val taskData = inputData
        val taskDataString = taskData.getString(MainActivity.MESSAGE_STATUS)
        showNotification("WorkManager", "Message has been Sent")
        // Return output data so observers can read the result
        val outputData = Data.Builder().putString(WORK_RESULT, "Jobs Finished").build()
        return Result.success(outputData)
    }

    private fun showNotification(task: String, desc: String) {
        val manager =
            applicationContext.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
        val channelId = "task_channel"
        val channelName = "task_name"
        // Notification channels are required from Android O onwards
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            val channel =
                NotificationChannel(channelId, channelName, NotificationManager.IMPORTANCE_DEFAULT)
            manager.createNotificationChannel(channel)
        }
        val builder = NotificationCompat.Builder(applicationContext, channelId)
            .setContentTitle(task)
            .setContentText(desc)
            .setSmallIcon(R.mipmap.ic_launcher)
        manager.notify(1, builder.build())
    }

    companion object {
        private const val WORK_RESULT = "work_result"
    }
}
4. Create WorkRequest
Let’s move to MainActivity and create a WorkRequest to execute the work that we just created. Now first we will create WorkManager. This work manager will enqueue and manage our work request.
var mWorkManager:WorkManager = WorkManager.getInstance()
Now we will create OneTimeWorkRequest, because we want to create a task that will be executed just once.
var mRequest: OneTimeWorkRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java).build()
Using this code, we have built a work request that will be executed only once.
5. Enqueue the request with WorkManager
In this step, in the button’s onClick() handler, we enqueue this request using the WorkManager. That’s all we need to do.
mWorkManager!!.enqueue(mRequest!!)
6. Fetch the particular task status
In this step, we will fetch some information about this particular task and display it on the tvStatus TextView. We will do that using the WorkInfo class. WorkManager provides LiveData for each of the work request objects; we can observe it and get the current status of the task.
mWorkManager!!.getWorkInfoByIdLiveData(mRequest!!.id).observe(this, { workInfo ->
if (workInfo != null) {
val state = workInfo.state
tvStatus!!.append(
"""
$state
""".trimIndent()
)
}
})
Finally, the full source code of MainActivity.kt looks like this:
package com.coolmonktechie.android.workmanagerdemo
import android.os.Build
import android.os.Bundle
import android.view.View
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequest
import androidx.work.WorkManager
class MainActivity : AppCompatActivity(), View.OnClickListener {
var tvStatus: TextView? = null
var btnSend: Button? = null
var btnStorageNotLow: Button? = null
var btnBatteryNotLow: Button? = null
var btnRequiresCharging: Button? = null
var btnDeviceIdle: Button? = null
var btnNetworkType: Button? = null
var mRequest: OneTimeWorkRequest? = null
var mWorkManager: WorkManager? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
initViews()
tvStatus = findViewById(R.id.tvStatus)
btnSend = findViewById(R.id.btnSend)
mWorkManager = WorkManager.getInstance()
}
private fun initViews() {
tvStatus = findViewById(R.id.tvStatus)
btnSend = findViewById(R.id.btnSend)
btnStorageNotLow = findViewById(R.id.buttonStorageNotLow)
btnBatteryNotLow = findViewById(R.id.buttonBatteryNotLow)
btnRequiresCharging = findViewById(R.id.buttonRequiresCharging)
btnDeviceIdle = findViewById(R.id.buttonDeviceIdle)
btnNetworkType = findViewById(R.id.buttonNetworkType)
btnSend!!.setOnClickListener(this)
btnStorageNotLow!!.setOnClickListener(this)
btnBatteryNotLow!!.setOnClickListener(this)
btnRequiresCharging!!.setOnClickListener(this)
btnDeviceIdle!!.setOnClickListener(this)
btnNetworkType!!.setOnClickListener(this)
}
override fun onClick(v: View) {
tvStatus!!.text = ""
val mConstraints: Constraints
when (v.id) {
R.id.btnSend -> mRequest =
OneTimeWorkRequest.Builder(NotificationWorker::class.java).build()
R.id.buttonStorageNotLow -> {
/**
* Constraints
* If TRUE task execute only when storage's is not low
*/
mConstraints = Constraints.Builder().setRequiresStorageNotLow(true).build()
/**
* OneTimeWorkRequest with requiresStorageNotLow Constraints
*/
mRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java)
.setConstraints(mConstraints).build()
}
R.id.buttonBatteryNotLow -> {
/**
* Constraints
* If TRUE task execute only when battery isn't low
*/
mConstraints = Constraints.Builder().setRequiresBatteryNotLow(true).build()
/**
* OneTimeWorkRequest with requiresBatteryNotLow Constraints
*/
mRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java)
.setConstraints(mConstraints).build()
}
R.id.buttonRequiresCharging -> {
/**
* Constraints
* If TRUE while the device is charging
*/
mConstraints = Constraints.Builder().setRequiresCharging(true).build()
/**
* OneTimeWorkRequest with requiresCharging Constraints
*/
mRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java)
.setConstraints(mConstraints).build()
}
R.id.buttonDeviceIdle -> if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
/**
* Constraints
* If TRUE while the device is idle
*/
mConstraints = Constraints.Builder().setRequiresDeviceIdle(true).build()
/**
* OneTimeWorkRequest with requiresDeviceIdle Constraints
*/
mRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java)
.setConstraints(mConstraints).build()
}
R.id.buttonNetworkType -> {
/**
* Constraints
* Network type is connected
*/
mConstraints =
Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
/**
* OneTimeWorkRequest with requiredNetworkType Connected Constraints
*/
mRequest = OneTimeWorkRequest.Builder(NotificationWorker::class.java)
.setConstraints(mConstraints).build()
}
else -> {
}
}
/**
* Fetch the particular task status using request ID
*/
mWorkManager!!.getWorkInfoByIdLiveData(mRequest!!.id).observe(this, { workInfo ->
if (workInfo != null) {
val state = workInfo.state
tvStatus!!.append(
"""
$state
""".trimIndent()
)
}
})
/**
* Enqueue the WorkRequest
*/
mWorkManager!!.enqueue(mRequest!!)
}
companion object {
const val MESSAGE_STATUS = "message_status"
}
}
Demo Application Output
In this section, we will see the demo application output screen as below. When we click on the Send Notification button, the job status is shown in the TextView.
In this article, we understood how to schedule tasks with WorkManager. This article explained the WorkManager workflow and its advantages in an Android application.
Thanks for reading! I hope you enjoyed and learned about WorkManager concepts in Android. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other CoolMonkTechie articles at the link below:
Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Run Android Tasks In Background Threads ?).
In this article, we will learn about how to run android tasks in background threads. All Android apps use a main thread to handle UI operations. Calling long-running operations from this main thread can lead to freezes and unresponsiveness. For example, if our app makes a network request from the main thread, our app’s UI is frozen until it receives the network response. We can create additional background threads to handle long-running operations while the main thread continues to handle UI updates.
This article shows both Kotlin and Java developers how to use a thread pool to set up and use multiple threads in an Android app. It also explains how to define code to run on a thread and how to communicate between one of these threads and the main thread.
A famous quote about learning is :
” That is what learning is. You suddenly understand something you’ve understood all your life, but in a new way. “
So Let’s begin.
Overview
In this example section, we will make a network request and return the result to the main thread, where the app then might display that result on the screen. Specifically, the ViewModel calls the repository layer on the main thread to trigger the network request. The repository layer is in charge of moving the execution of the network request off the main thread and posting the result back to the main thread using a callback.
To move the execution of the network request off the main thread, we need to create other threads in our app.
Creating Multiple Threads
A thread pool is a managed collection of threads that runs tasks in parallel from a queue. New tasks are executed on existing threads as those threads become idle. To send a task to a thread pool, use the ExecutorService interface. Note that ExecutorService has nothing to do with Services, the Android application component.
Creating threads is expensive, so we should create a thread pool only once, as our app initializes. Be sure to save the instance of the ExecutorService either in our Application class or in a dependency injection container. The following example creates a thread pool of four threads that we can use to run background tasks.
class MyApplication : Application() {
val executorService: ExecutorService = Executors.newFixedThreadPool(4)
}
Executing In A Background Thread
Making a network request on the main thread causes the thread to wait, or block, until it receives a response. Since the thread is blocked, the OS can’t call onDraw(), and our app freezes, potentially leading to an Application Not Responding (ANR) dialog. Instead, let’s run this operation on a background thread.
First, let’s take a look at our Repository class and see how it’s making the network request:
sealed class Result<out R> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val exception: Exception) : Result<Nothing>()
}

class LoginRepository(private val responseParser: LoginResponseParser) {

    // A plain property is used here because `const val` is only allowed at the top level or inside an object
    private val loginUrl = "https://example.com/login"

    // Function that makes the network request, blocking the current thread
    fun makeLoginRequest(
        jsonBody: String
    ): Result<LoginResponse> {
        val url = URL(loginUrl)
        (url.openConnection() as? HttpURLConnection)?.run {
            requestMethod = "POST"
            setRequestProperty("Content-Type", "application/json; charset=utf-8")
            setRequestProperty("Accept", "application/json")
            doOutput = true
            outputStream.write(jsonBody.toByteArray())
            return Result.Success(responseParser.parse(inputStream))
        }
        return Result.Error(Exception("Cannot open HttpURLConnection"))
    }
}
makeLoginRequest() is synchronous and blocks the calling thread. To model the response of the network request, we have our own Result class.
The ViewModel triggers the network request when the user taps, for example, on a button:
class LoginViewModel(
private val loginRepository: LoginRepository
) {
fun makeLoginRequest(username: String, token: String) {
val jsonBody = "{ username: \"$username\", token: \"$token\"}"
loginRepository.makeLoginRequest(jsonBody)
}
}
With the previous code, LoginViewModel is blocking the main thread when making the network request. We can use the thread pool that we’ve instantiated to move the execution to a background thread. First, following the principles of dependency injection, LoginRepository takes an instance of Executor as opposed to ExecutorService because it’s executing code and not managing threads:
class LoginRepository(
    private val responseParser: LoginResponseParser,
    private val executor: Executor
) { ... }
The Executor’s execute() method takes a Runnable. A Runnable is a Single Abstract Method (SAM) interface with a run() method that is executed in a thread when invoked.
Let’s create another function called makeLoginRequest() that moves the execution to the background thread and ignores the response for now:
class LoginRepository(
    private val responseParser: LoginResponseParser,
    private val executor: Executor
) {
    fun makeLoginRequest(jsonBody: String) {
        executor.execute {
            val ignoredResponse = makeSynchronousLoginRequest(jsonBody)
        }
    }

    private fun makeSynchronousLoginRequest(
        jsonBody: String
    ): Result<LoginResponse> {
        ... // HttpURLConnection logic
    }
}
Inside the execute() method, we create a new Runnable with the block of code we want to execute in the background thread: in our case, the synchronous network request method. Internally, the ExecutorService manages the Runnable and executes it in an available thread.
In Kotlin, we can use a lambda expression to create an anonymous class that implements the SAM interface.
Considerations
Any thread in our app can run in parallel to other threads, including the main thread, so we should ensure that our code is thread-safe. Notice that in our example we avoid writing to variables shared between threads, passing immutable data instead. This is a good practice, because each thread works with its own instance of data and we avoid the complexity of synchronization.
If we need to share state between threads, we must be careful to manage access from threads using synchronization mechanisms such as locks. In general we should avoid sharing mutable state between threads whenever possible.
Communicating With The Main Thread
In the previous step, we ignored the network request response. To display the result on the screen, LoginViewModel needs to know about it. We can do that by using callbacks.
The function makeLoginRequest() should take a callback as a parameter so that it can return a value asynchronously. The callback with the result is called whenever the network request completes or a failure occurs. In Kotlin, we can use a higher-order function.
class LoginRepository(
private val responseParser: LoginResponseParser,
private val executor: Executor
) {
fun makeLoginRequest(
jsonBody: String,
callback: (Result<LoginResponse>) -> Unit
) {
executor.execute {
try {
val response = makeSynchronousLoginRequest(jsonBody)
callback(response)
} catch (e: Exception) {
val errorResult = Result.Error(e)
callback(errorResult)
}
}
}
...
}
The ViewModel needs to implement the callback now. It can perform different logic depending on the result:
class LoginViewModel(
private val loginRepository: LoginRepository
) {
fun makeLoginRequest(username: String, token: String) {
val jsonBody = "{ username: \"$username\", token: \"$token\"}"
loginRepository.makeLoginRequest(jsonBody) { result ->
when(result) {
is Result.Success<LoginResponse> -> // Happy path
else -> // Show error in UI
}
}
}
}
In this example, the callback is executed in the calling thread, which is a background thread. This means that we cannot modify or communicate directly with the UI layer until we switch back to the main thread.
To communicate with the View from the ViewModel layer, use LiveData as recommended in the updated app architecture. If the code is being executed on a background thread, we can call MutableLiveData.postValue() to communicate with the UI layer.
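As a rough sketch, assuming LoginViewModel exposes the result through a MutableLiveData property (the property name loginResult is an assumption for this illustration):

import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

class LoginViewModel(
    private val loginRepository: LoginRepository
) : ViewModel() {

    // Observed by the UI layer (for example, an Activity or Fragment)
    val loginResult = MutableLiveData<Result<LoginResponse>>()

    fun makeLoginRequest(username: String, token: String) {
        val jsonBody = "{ username: \"$username\", token: \"$token\"}"
        loginRepository.makeLoginRequest(jsonBody) { result ->
            // The callback runs on a background thread, so postValue() is used instead of setValue()
            loginResult.postValue(result)
        }
    }
}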
Using Handlers
We can use a Handler to enqueue an action to be performed on a different thread. To specify the thread on which to run the action, construct the Handler using a Looper for the thread. A Looper is an object that runs the message loop for an associated thread. Once we’ve created a Handler, we can then use the post(Runnable) method to run a block of code in the corresponding thread.
Looper includes a helper function, getMainLooper(), which retrieves the Looper of the main thread. We can run code in the main thread by using this Looper to create a Handler. As this is something we might do quite often, we can also save an instance of the Handler in the same place we saved the ExecutorService:
class MyApplication : Application() {
val executorService: ExecutorService = Executors.newFixedThreadPool(4)
val mainThreadHandler: Handler = HandlerCompat.createAsync(Looper.getMainLooper())
}
It’s a good practice to inject the Handler into the Repository, as it gives us more flexibility. For example, in the future we might want to pass in a different Handler to schedule tasks on a separate thread. If we’re always communicating back to the same thread, we can pass the Handler into the Repository constructor, as shown in the following example.
class LoginRepository(
...
private val resultHandler: Handler
) {
fun makeLoginRequest(
jsonBody: String,
callback: (Result<LoginResponse>) -> Unit
) {
executor.execute {
try {
val response = makeSynchronousLoginRequest(jsonBody)
resultHandler.post { callback(response) }
} catch (e: Exception) {
val errorResult = Result.Error(e)
resultHandler.post { callback(errorResult) }
}
}
}
...
}
Alternatively, if we want more flexibility, we can pass in a Handler to each function:
class LoginRepository(...) {
...
fun makeLoginRequest(
jsonBody: String,
resultHandler: Handler,
callback: (Result<LoginResponse>) -> Unit
) {
executor.execute {
try {
val response = makeSynchronousLoginRequest(jsonBody)
resultHandler.post { callback(response) }
} catch (e: Exception) {
val errorResult = Result.Error(e)
resultHandler.post { callback(errorResult) }
}
}
}
}
In this example, the callback passed into the Repository’s makeLoginRequest call is executed on the main thread. That means we can directly modify the UI from the callback or use LiveData.setValue() to communicate with the UI.
Configuring A Thread Pool
We can create a thread pool using one of the Executor helper functions with predefined settings, as shown in the previous example code. Alternatively, if we want to customize the details of the thread pool, we can create an instance using ThreadPoolExecutor directly. We can configure the following details:
Initial and maximum pool size
Keep alive time and time unit. Keep alive time is the maximum duration that a thread can remain idle before it shuts down.
An input queue that holds Runnable tasks. This queue must implement the BlockingQueue interface. To match the requirements of our app, we can choose from the available queue implementations.
Here’s an example that specifies thread pool size based on the total number of processor cores, a keep alive time of one second, and an input queue.
class MyApplication : Application() {
/*
* Gets the number of available cores
* (not always the same as the maximum number of cores)
*/
private val NUMBER_OF_CORES = Runtime.getRuntime().availableProcessors()
// Instantiates the queue of Runnables as a LinkedBlockingQueue
private val workQueue: BlockingQueue<Runnable> =
LinkedBlockingQueue<Runnable>()
// Sets the amount of time an idle thread waits before terminating
// (a plain val is used because `const` is not allowed inside a class body)
private val KEEP_ALIVE_TIME = 1L
// Sets the Time Unit to seconds
private val KEEP_ALIVE_TIME_UNIT = TimeUnit.SECONDS
// Creates a thread pool manager
private val threadPoolExecutor: ThreadPoolExecutor = ThreadPoolExecutor(
NUMBER_OF_CORES, // Initial pool size
NUMBER_OF_CORES, // Max pool size
KEEP_ALIVE_TIME,
KEEP_ALIVE_TIME_UNIT,
workQueue
)
}
Concurrency Libraries
It’s important to understand the basics of threading and its underlying mechanisms. There are, however, many popular libraries that offer higher-level abstractions over these concepts and ready-to-use utilities for passing data between threads. These libraries include Guava and RxJava for the Java Programming Language users and coroutines, which we recommend for Kotlin users.
In practice, we should pick the one that works best for our app and our development team, though the rules of threading remain the same.
In this article, we understood how to run Android tasks in background threads. It showed both Kotlin and Java developers how to use a thread pool to set up and use multiple threads in an Android app, and it also explained how to define code to run on a thread and how to communicate between one of these threads and the main thread.
Thanks for reading! I hope you enjoyed and learned about running Android tasks in background threads. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
You can find other CoolMonkTechie articles at the link below: