JavaScript – How To Pass Arbitrary Parameters To A Function In JavaScript ?

Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Pass Arbitrary Parameters To A Function In JavaScript ?).

In this article, we will learn how to pass an arbitrary number of parameters to a function using rest parameters and the spread operator in JavaScript. Most programming languages provide some way to pass an arbitrary, indefinite number of parameters to a function, and JavaScript offers concepts that make this very easy. We will also see how to handle arbitrary parameters using the “rest parameter” in JavaScript.

In this article, we will discuss the below items to understand the concepts of handling the arbitrary parameters:

  • Handle arbitrary parameters using the “arguments” variable in JavaScript
  • Handle the arbitrary parameters using the “rest parameter” in JavaScript
  • Expand iterable objects using the “spread operator” in JavaScript

A famous quote about learning is :

” There is no end to education. It is not that you read a book, pass an examination, and finish with education. The whole of life, from the moment you are born to the moment you die, is a process of learning.”

So Let’s begin.


Handle arbitrary parameters using the “arguments” variable in JavaScript

The arguments variable is a special built-in, array-like object available inside JavaScript functions, and it can handle an arbitrary number of parameters. Consider a scenario where the function definition doesn’t declare any parameters, but at run time the caller wants to pass some. Such parameters can be accessed in the function body through the arguments variable, using their index.

Let’s understand the usage of “arguments” with the help of the following example:

<html>

   <body>

      Demonstrating the arguments keyword in JavaScript <br> <br>

      <script type='text/javascript'>

         function display() {
            document.write(arguments[0] + " " + arguments[1]);
         }

         display("Blog", "Coolmonktechie");

      </script>

   </body>

</html>

In the above example, we can see that the display method doesn’t declare any parameters. Still, while calling it, we pass two arguments, and we can access them by index through the arguments variable.

Even though the “arguments” variable is both array-like and iterable, it’s not a true array. It does not support array methods, so we can’t call arguments.map(...), for example.
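To use array methods on the received values, the arguments object first has to be converted into a real array, for example with Array.from. A small sketch (the doubleAll function is hypothetical, just for illustration):

```javascript
// Convert the array-like `arguments` object into a real array
// so that array methods such as map become available.
function doubleAll() {
   var args = Array.from(arguments); // now a real array
   return args.map(function (x) { return x * 2; });
}

console.log(doubleAll(1, 2, 3)); // [2, 4, 6]
```

A rest parameter (covered next) avoids this conversion entirely, because it is already a real array.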


Handle arbitrary parameters using the “rest parameter” in JavaScript

The “rest parameter” is the more modern way JavaScript provides to handle an arbitrary number of parameters. Its syntax looks like this:

function functionName(...args) {
   // statements to execute
}

Let’s try to understand the details and usage of the rest parameter with the help of the following example:

Consider a case where we need to perform a multiplication operation; the function will look like:

function multiply(variable1, variable2) {
   return variable1 * variable2;
}

Now, the above code works perfectly when there are exactly 2 arguments. But imagine a situation where we need to multiply n variables, and n can differ each time the function is invoked. In this case, the number of arguments depends entirely on the caller.

To achieve this, we can use the rest parameter, as shown below:

Example: when the function just accepts the “rest” parameter

<html>

   <body>

      Demonstrating the rest parameter in JavaScript <br> <br>

      <script type='text/javascript'>

         function multiply(...variables) {
            var output = 1;
            for (let x of variables) {
               output *= x;
            }
            return output;
         }

         document.write("Multiplication of 2 variables 3 and 5 is " + multiply(3, 5));
         document.write("<br>");
         document.write("Multiplication of 3 variables 3, 2 and 5 is " + multiply(3, 2, 5));
         document.write("<br>");
         document.write("Multiplication of 0 variables is " + multiply());

      </script>

   </body>

</html>

In the above example, we can see that there is only one multiply function, but it can take any number of variables, decided by the caller. The “rest” parameter is specified by “...variables” in the multiply function, and it can accept zero or more arguments.
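Because a rest parameter is a genuine array (unlike arguments), array methods work on it directly. The same multiply could be written with reduce; this variant is just an alternative sketch, not part of the original example:

```javascript
// Rest parameters are real arrays, so reduce can replace the loop.
function multiply(...variables) {
   return variables.reduce(function (product, x) { return product * x; }, 1);
}

console.log(multiply(3, 5));    // 15
console.log(multiply(3, 2, 5)); // 30
console.log(multiply());        // 1 (the initial value)
```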

The “rest” parameter can be combined with other parameters of the function as well, but it must always be the last parameter of the function. Let’s understand this with the help of the following example:

Example: when the function accepts named parameters with the rest parameter

Let’s consider an example that takes two mandatory parameters, and the last variable is a “rest” parameter. The following code snippet shows the usage of the “rest” parameter as the last parameter of the function:

<html>

   <body>

      Demonstrating the rest parameter in JavaScript <br> <br>

      <script type='text/javascript'>

         function multiply(var1, var2, ...variables) {
            document.write(var1 + " " + var2);
            document.write("<br>");
            var output = 1;
            for (let x of variables) {
               output *= x;
            }
            return output;
         }

         document.write("Multiplication of 2 variables 3 and 5 is " + multiply("Blog", "Testing", 3, 5));
         document.write("<br>");
         document.write("Multiplication of 3 variables 3, 2 and 5 is " + multiply("Java", "Script", 3, 2, 5));
         document.write("<br>");
         document.write("Multiplication of 0 variables is " + multiply("Blog", "Output"));

      </script>   

   </body>

</html>

As we can see in the above code snippet, the multiply function accepts var1 and var2 as mandatory parameters, and the last parameter, “...variables”, is the “rest” parameter. The caller invokes the function by passing the two required parameters followed by any number of additional arguments.


Expand iterable objects using the “spread operator” in JavaScript

So far, we have seen how to get an array from a list of parameters. But sometimes we need to do precisely the reverse.

For example, consider the built-in function Math.max, which returns the greatest number from a list:

Math.max(3, 5, 1) // Returns 5

Now let’s say we have an array [3,5,1]. How do we call “Math.max” with it? Passing it “as is” won’t work, because “Math.max” expects a list of numeric arguments and not a single array object:

let arr = [3, 5, 1];

Math.max(arr); // NaN

And surely we can’t manually list the items, as in Math.max(arr[0], arr[1], arr[2]), because we may not know how many there are: by the time the function is invoked, there could be many parameters or none at all.
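Before spread syntax existed, the usual workaround was Function.prototype.apply, which calls a function with the elements of an array as its individual arguments:

```javascript
// Pre-spread workaround: apply expands an array into individual arguments.
let arr = [3, 5, 1];

console.log(Math.max.apply(null, arr)); // 5
```

Spread syntax, shown next, achieves the same expansion with far less ceremony.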

The spread syntax can handle such a scenario. It looks similar to rest parameters, also using “...”, but does quite the opposite. When “...arr” is used in a function call, it “expands” the iterable object “arr” into a list of arguments. Its syntax looks like this:

Syntax:

var array = [val1, val2, val3, ..., valN];

callingFunction(...array); // ...array converts the array to a list of arguments

For “Math.max” it will look like:

let arr = [3, 5, 1];

Math.max(...arr); // 5 (spread turns array into a list of arguments)

We also can pass multiple iterable objects this way:

let arr1 = [1, -2, 3, 4];
let arr2 = [8, 3, -8, 1];

Math.max(...arr1, ...arr2); // 8

We can even combine the spread syntax with normal values:

let arr1 = [1, -2, 3, 4];
let arr2 = [8, 3, -8, 1];

Math.max(1, ...arr1, 2, ...arr2, 25); // 25

Let’s understand the detailed usage of the “spread” syntax with the help of the following example:

Example:

<html>

   <body>

      Demonstrating the spread operator in JavaScript <br> <br>

      <script type='text/javascript'>

         function display(...args) { // this is the rest parameter
            document.write("Total number of parameters is " + args.length);
            document.write("<br>");
            for (let x of args) {
               document.write(x + " ");
            }
         }

         var data = ["Blog", "CoolmonkTechie", "JavaScript", "Tutorial"];

         display(...data); // this is the spread operator

      </script>   

   </body>

</html>

As we can see in the above example, we pass the data array to the “display” function with the help of the spread operator, and the array is converted to a list of arguments.
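Beyond function calls, the spread operator also expands iterables inside array literals, which gives an easy way to merge or copy arrays. A small illustrative sketch:

```javascript
let part1 = ["Blog", "CoolmonkTechie"];
let part2 = ["JavaScript", "Tutorial"];

let merged = [...part1, ...part2]; // merge two arrays
let copy = [...merged];            // shallow copy of an array

console.log(merged); // ["Blog", "CoolmonkTechie", "JavaScript", "Tutorial"]
console.log(copy.length); // 4
```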

That’s all about in this article.


Conclusion

In this article, we understood how to pass arbitrary parameters to a function using rest parameters and the spread operator in JavaScript. We conclude that:

  • The “arguments” variable is a special array-like object that contains all the passed parameters, accessible by index.
  • Rest parameters pass and handle an arbitrary number of arguments.
  • The spread operator converts an iterable object, such as an array, into a list of independent parameters.

Thanks for reading !! I hope you enjoyed and learned about Rest Parameters and Spread Operator Concept in javascript. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING!!

ReactJS – How To Use Axios with ReactJS ?

Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Use Axios with ReactJS ?).

In this article, we will understand how to use Axios with ReactJS. Many projects on the web need to interface with a REST API at some stage in their development. Axios is a lightweight HTTP client based on the $http service within Angular.js v1.x and is similar to the native JavaScript Fetch API. In this article, we will see examples of how to use Axios to access the popular JSON Placeholder API within a React application.

A famous quote about learning is :

” Change is the end result of all true learning.”

So Let’s begin.


Introduction

Axios is promise-based, which gives you the ability to take advantage of JavaScript’s async and await for more readable asynchronous code.

We can also intercept and cancel requests, and there’s built-in client-side protection against cross-site request forgery.


Prerequisites

To follow along with this article, we’ll need the following:

  • Node.js version 10.16.0 installed on our computer.
  • A new React project set up with Create React App.
  • It will also help to have a basic understanding of JavaScript along with a basic knowledge of HTML and CSS.


Steps to Use Axios library


Step 1 – Adding Axios to the Project

In this section, we will add Axios to the axios-tutorial React project. To add Axios to the project, open our terminal and change directories into our project:

$ cd axios-tutorial

Then run this command to install Axios:

$ npm install axios

Next, we will need to import Axios into the file we want to use it in.


Step 2 – Making a GET Request

In this example, we create a new component and import Axios into it to send a GET request.

Inside the src folder of our React project, create a new component named PersonList.js:

src/PersonList.js

import React from 'react';

import axios from 'axios';

export default class PersonList extends React.Component {
  state = {
    persons: []
  }

  componentDidMount() {
    axios.get(`https://jsonplaceholder.typicode.com/users`)
      .then(res => {
        const persons = res.data;
        this.setState({ persons });
      })
  }

  render() {
    return (
      <ul>
        { this.state.persons.map(person => <li key={person.id}>{person.name}</li>) }
      </ul>
    )
  }
}

First, we import React and Axios so that both can be used in the component. Then we hook into the componentDidMount lifecycle hook and perform a GET request.

We use axios.get(url) with a URL from an API endpoint to get a promise which resolves to a response object. Inside the response object there is data, which is then assigned to persons.

We can also get other information about the request, such as the status code under res.status or more information inside of res.request.
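Note that the snippet above omits error handling. Since axios.get returns a standard promise, a .catch can be chained after .then. The sketch below shows the pattern without a network: fakeGet is a hypothetical stand-in for axios.get that resolves with an axios-like response object ({ data, status }).

```javascript
// fakeGet is a hypothetical stand-in for axios.get; it resolves with
// an axios-like response object so the pattern can run offline.
function fakeGet(url) {
  return Promise.resolve({ status: 200, data: [{ name: 'Leanne Graham' }] });
}

fakeGet('https://jsonplaceholder.typicode.com/users')
  .then(res => {
    console.log(res.status);        // the status code
    console.log(res.data[0].name);  // the first user's name
  })
  .catch(err => {
    // runs on network failures or when the promise rejects
    console.error('Request failed:', err.message);
  });
```

In the real component, the same .catch would be chained onto axios.get, for example to store an error message in state.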


Step 3 – Making a POST Request

In this step, we will use Axios with another HTTP request method called POST.

Remove the previous code in PersonList and add the following to create a form that allows for user input and subsequently POSTs the content to an API:

src/PersonList.js

import React from 'react';
import axios from 'axios';

export default class PersonList extends React.Component {
  state = {
    name: '',
  }

  handleChange = event => {
    this.setState({ name: event.target.value });
  }

  handleSubmit = event => {
    event.preventDefault();

    const user = {
      name: this.state.name
    };

    axios.post(`https://jsonplaceholder.typicode.com/users`, { user })
      .then(res => {
        console.log(res);
        console.log(res.data);
      })
  }

  render() {
    return (
      <div>
        <form onSubmit={this.handleSubmit}>
          <label>
            Person Name:
            <input type="text" name="name" onChange={this.handleChange} />
          </label>
          <button type="submit">Add</button>
        </form>
      </div>
    )
  }
}

Inside the handleSubmit function, we prevent the default action of the form; the state is updated with the user input through handleChange.

Using POST gives us the same response object with information that we can use inside of a then call.

To complete the POST request, we first capture the user input. Then we add the input along with the POST request, which will give us a response. We can then console.log the response, which should show the user input in the form.
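The POST flow described above can be sketched without a network by using a stand-in for axios.post (fakePost below is hypothetical; it echoes the submitted body back with an id, similar to how JSON Placeholder responds):

```javascript
// fakePost is a hypothetical stand-in for axios.post: it echoes the
// submitted body back together with a generated id.
function fakePost(url, body) {
  return Promise.resolve({ status: 201, data: Object.assign({ id: 11 }, body) });
}

const user = { name: 'Fred' };

fakePost('https://jsonplaceholder.typicode.com/users', { user })
  .then(res => {
    console.log(res.status);         // 201
    console.log(res.data.user.name); // Fred
  });
```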


Step 4 – Making a DELETE Request

In this example, we will see how to delete items from an API using axios.delete and passing a URL as a parameter.

Change the code for the form from the POST example to delete a user instead of adding a new one:

src/PersonList.js

import React from 'react';
import axios from 'axios';

export default class PersonList extends React.Component {
  state = {
    id: '',
  }

  handleChange = event => {
    this.setState({ id: event.target.value });
  }

  handleSubmit = event => {
    event.preventDefault();

    axios.delete(`https://jsonplaceholder.typicode.com/users/${this.state.id}`)
      .then(res => {
        console.log(res);
        console.log(res.data);
      })
  }

  render() {
    return (
      <div>
        <form onSubmit={this.handleSubmit}>
          <label>
            Person ID:
            <input type="text" name="id" onChange={this.handleChange} />
          </label>
          <button type="submit">Delete</button>
        </form>
      </div>
    )
  }
}

Again, the res object provides us with information about the request. We can then console.log that information again after the form is submitted.


Step 5 – Using a Base Instance in Axios

In this example, we will see how we can set up a base instance in which we can define a URL and any other configuration elements.

Create a separate file named api.js. Export a new axios instance with these defaults:

src/api.js

import axios from 'axios';

export default axios.create({
  baseURL: `http://jsonplaceholder.typicode.com/`
});

Once the default instance is set up, it can then be used inside of the PersonList component. We import the new instance like this:

src/PersonList.js

import React from 'react';
import axios from 'axios';

import API from './api';

export default class PersonList extends React.Component {
  handleSubmit = event => {
    event.preventDefault();

    API.delete(`users/${this.state.id}`)
      .then(res => {
        console.log(res);
        console.log(res.data);
      })
  }
}

Because http://jsonplaceholder.typicode.com/ is now the base URL, we no longer need to type out the whole URL each time we want to hit a different endpoint on the API.


Step 6 – Using async and await

In this example, we will see how we can use async and await to work with promises.

The await keyword resolves the promise and returns the value. The value can then be assigned to a variable.

handleSubmit = async event => {
  event.preventDefault();

  // await resolves the promise returned by API.delete
  const response = await API.delete(`users/${this.state.id}`);

  console.log(response);
  console.log(response.data);
};

In this code sample, the .then() call is replaced: the promise is resolved by await, and its value is stored in the response variable.
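A try/catch block is the async/await counterpart of .catch for handling failures. The sketch below shows the pattern without a network; fakeDelete is a hypothetical stand-in for API.delete:

```javascript
// fakeDelete is a hypothetical stand-in for API.delete; it resolves with
// an axios-like response for known paths and rejects otherwise.
function fakeDelete(path) {
  if (path.startsWith('users/')) {
    return Promise.resolve({ status: 200, data: {} });
  }
  return Promise.reject(new Error('unknown endpoint'));
}

async function handleDelete(id) {
  try {
    const response = await fakeDelete(`users/${id}`);
    return response.status;
  } catch (err) {
    // a rejected promise lands here instead of in a .catch
    console.error('Delete failed:', err.message);
    return null;
  }
}

handleDelete(1).then(status => console.log(status)); // 200
```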

That’s all about in this article.


Conclusion

In this article, we understood how to use Axios with ReactJS. We explored several examples of how to use Axios inside a React application to create HTTP requests and handle responses.

Thanks for reading ! I hope you enjoyed and learned about the Axios library usage in ReactJS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING!!

A Short Note – Higher-order Components vs Render Props In React JS

Hello Readers, CoolMonkTechie heartily welcomes you in A Short Note Series (Higher-order Components vs Render Props In React JS).

In this note series, we will learn about Higher-order components vs Render Props in React JS. Higher-order components (HOC) and render props are two ways to build cross-cutting code in React JS. How do we decide to use one over the other?

The reason we have these two approaches is that React adopted ES6 classes for building components that manage state. Before that, mixins via React.createClass were the way to share cross-cutting concerns between components. However, ES6 classes do not support mixins, so a new approach had to be developed.

So Let’s begin.

Higher-order Components

Soon, HOCs evolved to support code reuse. Essentially, a HOC is like the decorator pattern: a function that takes a component as its first parameter and returns a new component. This is where we apply our cross-cutting functionality.

Example of higher-order component:

function withExample(Component) {
  return function(props) {
    // cross cutting logic added here
    return <Component {...props} />;
  };
}

What does HOC solve?

  • Importantly, they provided a way to reuse code when using ES6 classes.
  • No more method-name clashes when two HOCs implement the same method.
  • It is easy to make small reusable units of code, supporting the single responsibility principle.
  • Multiple HOCs can be applied to one component by composing them. The readability can be improved using a compose function, as in Recompose.

We can see similarities in the downsides for both mixins and HOC:

  • There is still an indirection issue; however, it is no longer about which HOC is changing the state but about which one is providing a certain prop.
  • It is possible for two HOCs to use the same prop, meaning one would silently overwrite the other.

Higher-order components come with new problems:

  • Boilerplate code, like setting the displayName with the HOC function name (e.g. withHOC(Component)) to help with debugging.
  • Ensure all relevant props are passed through to the component.
  • Hoist static methods from the wrapped component.
  • It is easy to compose several HOCs together and then this creates a deeply nested tree, making it difficult to debug.
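As mentioned above, multiple HOCs are usually combined with a compose helper, like the one Recompose provides. A minimal sketch of such a helper, demonstrated here with plain functions standing in for real components (withA and withB are hypothetical enhancers):

```javascript
// Minimal compose: compose(f, g)(x) === f(g(x)), applying right-to-left,
// like Recompose's compose.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

// Hypothetical enhancers standing in for HOCs:
const withA = s => s + 'A';
const withB = s => s + 'B';

const enhance = compose(withA, withB); // equivalent to withA(withB(x))
console.log(enhance('Component')); // ComponentBA
```

With real HOCs the shape is the same: `compose(withRouter, connect(mapState))(Component)`.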

Render Props

A render prop is a component prop that is assigned a function, which is called in the component’s render method. Calling the function can return a React element or component to render.

Example of using a render prop:

render() {
  return (
    <FetchData render={(data) => {
      return <p>{data}</p>
    }} />
  );
}
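Stripped of JSX, the mechanics are just a function passed as a prop and invoked by the component that owns the data. A minimal sketch in plain JavaScript (FetchData here is a hypothetical plain function, not a real React component):

```javascript
// Render-prop idea in plain JavaScript: the component owns the data
// and calls the function it receives to decide what to render.
function FetchData(props) {
  const data = 'loaded data'; // stand-in for fetched state
  return props.render(data);
}

const output = FetchData({ render: data => '<p>' + data + '</p>' });
console.log(output); // <p>loaded data</p>
```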

What do render props solve?

  • Reuse code across components when using ES6 classes.
  • The lowest level of indirection – it’s clear which component is called and the state is isolated.
  • No naming collision issues for props, state and class methods.
  • No need to deal with boilerplate code and hoisting static methods.

Minor problems:

  • Be cautious using shouldComponentUpdate, as the render prop might close over data it is unaware of.
  • There could also be minor memory issues from defining a closure for every render. But be sure to measure before making performance changes, as it might not be an issue for our app.
  • Another small annoyance is that the render-prop callback is not as neat in JSX, as it needs to be wrapped in an expression. Rendering the result of an HOC looks cleaner.

HOC or Render props

From this, we can say render props solve the issues posed by HOCs, and they should be our go-to pattern for creating cross-cutting logic. Render props are easier to set up, with less boilerplate code and no need to hoist static methods, as they behave like standard components. They are also more predictable, as fewer things can go wrong with updating state and passing props through.

However, HOCs compose better than render props, especially when many cross-cutting concerns are applied to a component. Many nested render-prop components look like “callback hell”. It’s straightforward to create small HOC units and compose them together to build a feature-rich component. Recompose is a great example and can be useful for solving our next challenge.

Just remember to use the tool that best helps us solve our problem and don’t let the Hype Driven Development pressure us to do otherwise. Render props and HOC are equally great React patterns.

Conclusion

In this note series, we understood Higher-order components vs Render Props in React JS. We also discussed how to decide to use one over the other in ReactJS.

Thanks for reading! I hope you enjoyed and learned about the Higher-order components vs Render Props concept in React JS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, leave them below in the comment box.

Thanks again for reading. HAPPY READING!!

iOS – How To Select The Best Method Of Scheduling Background Runtime In iOS ?

Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Select The Best Method Of Scheduling Background Runtime In iOS ?).

In this article, We will understand how to select the best method of scheduling background runtime for our app in iOS. Selecting the right strategies for our app in iOS depends on how it functions in the background.

A famous quote about Learning is :

” Change is the end result of all true learning. “


So Let’s begin.


Overview

If our app needs computing resources to complete tasks when it’s not running in the foreground, we can select from a number of strategies to obtain background runtime. Selecting the right strategies for our app depends on how it functions in the background.

Some apps perform work for a short time while in the foreground and must continue uninterrupted if they go to the background. Other apps defer that work to perform in the background at a later time or even at night while the device charges. And some apps need background processing time at varied and unpredictable times, such as when an external event or message arrives.


Different Methods Of Scheduling Background Runtime

In this section, we select one or more methods for our app based on how we schedule activity in the background.


1. Continue Foreground Work in the Background

The system may place apps in the background at any time. If our app performs critical work that must continue while it runs in the background, use beginBackgroundTask(withName:expirationHandler:) to alert the system. Consider this approach if our app needs to finish sending a message or complete saving a file.

The system grants our app a limited amount of time to perform its work once it enters the background. Don’t exceed this time, and use the expiration handler to cancel or defer the work in case the time runs out.

Once our work completes, call endBackgroundTask(_:) before the time limit expires so that our app suspends properly. The system terminates our app if we fail to call this method.

If the task is one that takes some time, such as downloading or uploading files, use URLSession.


2. Defer Intensive Work

To preserve battery life and performance, we can schedule background tasks for periods of low activity, such as overnight while the device charges. Use this approach when our app manages heavy workloads, such as training machine learning models or performing database maintenance.

Schedule these types of background tasks using BGProcessingTask, and the system decides the best time to launch our background task.


3. Update Our App’s Content

Our app may require short bursts of background time to perform content refresh or other work; for example, our app may fetch content from the server periodically, or regularly update its internal state. In this situation, use BGAppRefreshTask by requesting BGAppRefreshTaskRequest.

The system decides the best time to launch our background task, and provides our app up to 30 seconds of background runtime. Complete our work within this time period and call setTaskCompleted(success:), or the system terminates our app. 


4. Wake Our App with a Background Push

Background pushes silently wake our app in the background. They don’t display an alert, play a sound, or badge our app’s icon. If our app obtains content from a server infrequently or at irregular intervals, use background pushes to notify our app when new content becomes available. A messaging app with a muted conversation might use a background push solution, and so might an email app that processes incoming mail without alerting the user.

When sending a background push, set content-available to 1, with no alert, sound, or badge. The system decides when to launch the app to download the content. To ensure our app launches, set apns-priority to 5 and apns-push-type to background.

Once the system delivers the remote notification with application(_:didReceiveRemoteNotification:fetchCompletionHandler:), our app has up to 30 seconds to complete its work. Once our app finishes the work, call the passed completion handler as soon as possible to conserve power. If we send background pushes more frequently than three times per hour, the system imposes rate limits.


5. Request Background Time and Notify the User

If our app needs to perform a task in the background and show a notification to the user, use a Notification Service Extension. For example, an email app might need to notify a user after downloading a new email. Subclass UNNotificationServiceExtension and bundle the system extension with our app. Upon receiving a push notification, our service extension wakes up and obtains background runtime through didReceive(_:withContentHandler:).

When our extension completes its work, it must call the content handler with the content we want to deliver to the user. Our extension has a limited amount of time to modify the content and execute the contentHandler block.

That’s all about in this article.


Conclusion

In this article, We understood how to select the best method of scheduling background runtime in iOS.

Thanks for reading ! I hope you enjoyed and learned about selecting best method for scheduling background runtime concept in iOS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING!!

A Short Note – How To Debug HTTPS Problems With CFNetwork Diagnostic Logging In iOS ?

Hello Readers, CoolMonkTechie heartily welcomes you in A Short Note Series (How To Debug HTTPS Problems With CFNetwork Diagnostic Logging In iOS ?).

In this note series, we will understand how to use CFNetwork diagnostic logging to investigate HTTP and HTTPS problems in iOS.


So Let’s begin.


Overview

If we’re using URLSession and need to debug a complex networking issue, we can enable CFNetwork diagnostic logging to get detailed information about the progress of our network requests. CFNetwork diagnostic logging has unique advantages relative to other network debugging tools, including:

  • Minimal setup.
  • The ability to look at network traffic that’s protected by Transport Layer Security (TLS).
  • Information about CFNetwork’s internal state, like which cookies get saved and applied.

CFNetwork diagnostic logging is not exclusive to the CFNetwork framework. The core implementation of the URLSession API lives within the CFNetwork framework, and thus we can and should use CFNetwork diagnostic logging if we’re using URLSession.


Understand the Security Implications

CFNetwork diagnostic logs may contain decrypted TLS data and other security-sensitive information. Take these precautions:

  • Restrict access to any logs we capture.
  • If we build an app that enables this logging programmatically, make sure that anyone who receives that app understands the security implications of using it.
  • If we send a log to Apple, redact any security-sensitive information.

CFNetwork diagnostic logs may contain information that is extremely security-sensitive. Protect these logs accordingly.


Enable Logging In Xcode

To enable CFNetwork diagnostic logging, edit the current scheme (choose Product > Scheme > Edit Scheme), navigate to the Arguments tab, and add a CFNETWORK_DIAGNOSTICS item to the Environment Variables list. The value of this item can range from 0 to 3, where 0 turns logging off, and higher numbers give us progressively more logging. When we next run our app and use URLSession, CFNetwork diagnostic log entries appear in Xcode’s debug console area. If the console area isn’t visible, choose View > Debug Area > Show Debug Area to show it.


Enable Logging Programmatically

To investigate problems outside of Xcode, programmatically enable CFNetwork diagnostic logging by setting the environment variable directly.

setenv("CFNETWORK_DIAGNOSTICS", "3", 1)

Do this right at the beginning of the app’s launch sequence:

  • If we’re programming in Objective-C, put the code at the start of our main function.
  • If our program has a C++ component, make sure this code runs before any C++ static initializers that use CFNetwork or any APIs, like URLSession, that use CFNetwork.
  • If we’re programming in Swift, put this code in main.swift. By default, Swift apps don’t have a main.swift. We need to add one.


View Log Entries

How we view the resulting log entries depends on our specific situation:

  • In macOS, if we can reproduce the problem locally, run the Console utility on our Mac and view log entries there.
  • In iOS, if we can reproduce the problem locally, and we’re able to connect the device to our Mac through USB, run the Console utility on our Mac and view log entries there. Make sure we select our iOS device from the source list on the left of the main Console window (choose View > Show Sources if the source list is not visible).
  • If neither of the above works for us (for example, if we’re trying to debug a problem that only one of our users in the field can reproduce), get a sysdiagnose log from the machine exhibiting the problem and then extract the log entries from that.
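On macOS, the unified log can also be queried from Terminal with the log tool. A hedged sketch (the predicate assumes CFNetwork tags its entries with the com.apple.CFNetwork subsystem, which may vary by OS version, and the archive filename is a placeholder):

```shell
# Stream CFNetwork log entries live while reproducing the problem.
log stream --predicate 'subsystem == "com.apple.CFNetwork"' --info --debug

# Or extract matching entries from a log archive after the fact.
log show sysdiagnose_logs.logarchive --predicate 'subsystem == "com.apple.CFNetwork"'
```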


Conclusion

In this note series, we understood how to use CFNetwork diagnostic logging to investigate HTTP and HTTPS problems in iOS.

Thanks for reading! I hope you enjoyed and learned about CFNetwork diagnostic logging usage in iOS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, leave them below in the comment box.

Thanks again for reading. HAPPY READING !!

A Short Note – How To Debug HTTP Server-Side Errors In iOS ?

Hello Readers, CoolMonkTechie heartily welcomes you in A Short Note Series (How To Debug HTTP Server-Side Errors In iOS ?).

In this note series, we will understand HTTP server-side errors and how to debug them in iOS.


So Let’s begin.


Overview

Apple’s HTTP APIs report transport errors and server-side errors:

  • A transport error is caused by a problem getting our request to, or getting the response from, the server. These errors are represented by an NSError value, typically passed to our completion handler block or to a delegate method like urlSession(_:task:didCompleteWithError:). If we get a transport error, investigate what’s happening with our network traffic.
  • A server-side error is caused by problems detected by the server. Such errors are represented by the statusCode property of the HTTPURLResponse.

The status codes returned by the server aren’t always easy to interpret. Many HTTP server-side errors don’t give us a way to determine, from the client side, what went wrong. These include the 5xx errors (like 500 Internal Server Error) and many 4xx errors (for example, with 400 Bad Request, it’s hard to know exactly why the server considers the request bad).


Print the HTTP Response Body

In this section, we explain how to debug these server-side problems.

Sometimes, the error response from the server includes an HTTP response body that explains what the problem is. Look at the HTTP response body to see whether such an explanation is present. If it is, that’s the easiest way to figure out what went wrong. For example, consider this standard URLSession request code.

URLSession.shared.dataTask(with: url) { (responseBody, response, error) in
    if let error = error {
        // handle transport error
        return
    }
    let response = response as! HTTPURLResponse
    let responseBody = responseBody!
    if !(200...299).contains(response.statusCode) {
        // handle HTTP server-side error
        return
    }
    // handle success
    print("success")
}.resume()

A server-side error runs the line labeled handle HTTP server-side error. To see whether the server’s response contains any helpful hints about what went wrong, add some code that prints the HTTP response body.

        // handle HTTP server-side error
        if let responseString = String(bytes: responseBody, encoding: .utf8) {
            // The response body seems to be a valid UTF-8 string, so print that.
            print(responseString)
        } else {
            // Otherwise print a hex dump of the body.
            print(responseBody as NSData)
        }


Compare Against a Working Client

If the HTTP response body doesn’t help, compare our request to a request issued by a working client. The server might not fail when it receives the same request from:

  • A web browser, like Safari
  • A command-line tool, like curl
  • An app running on a different platform
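For example, a curl invocation like the one below prints the full request and response headers, which we can then compare field by field against what our app sends. The URL, header, and body here are placeholders:

```shell
# --verbose echoes the request line, request headers, and response headers.
curl --verbose \
     --request POST \
     --header "Content-Type: application/json" \
     --data '{"name": "test"}' \
     "https://example.com/api/items"
```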

If we have a working client, it’s relatively straightforward to debug our problem:

  1. Use a network debugging tool to record the requests made by our client and by the working client. If we’re using HTTP (not HTTPS), use a low-level packet trace tool to record these requests. If we’re using HTTPS, with Transport Layer Security (TLS), we can’t see the HTTP request. In that case, if our server has a debugging mode that lets us see the plaintext request, look there. If not, a debugging HTTP proxy may let us see the request.
  2. Compare the two requests. Focus on the most significant values first.
    • Do the URL paths or the HTTP methods match?
    • Do the Content-Type headers match?
    • What about the remaining headers?
    • Do the request bodies match?
    • If these all match and things still don’t work, we may need to look at more obscure values, like the HTTP transfer encoding and, if we’re using HTTPS, various TLS parameters.
  3. Address any discrepancies.
  4. Retry with our updated client.
  5. If things still fail, go back to step 1.


Debug on the Server

If we don’t have access to a working client, or we can’t get things to work using the steps described in the previous section, our only remaining option is to debug the problem on the server. Ideally, the server will have documented debugging options that offer more insight into the failure. If not, escalate the problem through the support channel associated with our server software.


Conclusion

In this note series, we understood HTTP server-side errors and how to debug them in iOS.

Thanks for reading! I hope you enjoyed and learned about HTTP Server-side Concept in iOS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, leave them below in the comment box.

Thanks again for reading. HAPPY READING !!

A Short Note – Choosing Network Debugging Tool In iOS

Hello Readers, CoolMonkTechie heartily welcomes you in A Short Note Series (Choosing Network Debugging Tool In iOS).

In this note series, we will understand which tool works best for our network debugging problem in iOS.

So Let’s begin.

Overview

Debugging network problems is challenging because of the fundamental nature of networking. Networking is asynchronous, time-sensitive, and error-prone. The two programs involved (the client and the server, say) are often created by different developers, who may disagree on the exact format of the data being exchanged. Fortunately, a variety of tools can help us debug such problems.

A key goal of these tools is to divide the problem in two. For example, if we’re working on a network client that sends a request to a server and then gets an error back from that server, it’s important to know whether things failed because the request was incorrect (a problem with our client) or because the server is misbehaving. We can use these network debugging tools to view the traffic going over the network, and thus independently check the validity of that traffic.

Choosing Best Network Debugging Tool

The best tool to use depends on the APIs we’re using and the problems we’ve encountered:

  • If we’re working at the HTTP level, we may find that our request makes it to the server and the server then sends back a response showing that it failed (for example, an HTTP response with a status code of 500 Internal Server Error).
  • If we’re using URLSession, or a subsystem that uses URLSession internally, we can enable CFNetwork diagnostic logging to get a detailed view of how our requests were processed.
  • If we want a low-level view of the traffic exchanged over the network, we need a packet trace.
  • If we’re working in Safari or one of the various web views (like WKWebView), we can use the Web Inspector to view the network requests issued by the page.
  • Some of the most popular network debugging tools, like HTTP debugging proxies, are third-party products.

Conclusion

In this note series, we understood which tool works best for our network debugging problem in iOS.

Thanks for reading! I hope you enjoyed and learned about choosing the best network debugging tool in iOS. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, leave them below in the comment box.

Thanks again for reading. HAPPY READING !!

Android – How To Use Sensors In Android ?

Hello Readers, CoolMonkTechie heartily welcomes you in this article (How To Use Sensors In Android ?).

In this article, we will learn how to use Android sensors. We have all played Android games that use sensor support, where tilting the phone triggers actions in the game. For example, in the Temple Run game, tilting the phone to the left or right changes the runner’s position. All such games use the sensors present in your Android device. Other examples include shaking the phone to lock the screen and finding a direction with the help of a compass. All these are examples of Android sensors.

Use sensors on the device to add rich location and motion capabilities to our app, from GPS or network location to accelerometer, gyroscope, temperature, barometer, and more.

To understand the Android Sensors, we will discuss the below topics :

  • Overview
  • Sensor Coordinate System
  • Categories of Sensors
  • Android Sensor Framework
  • Perform Tasks To Use Sensor-Related APIs
  • Handling Different Sensor Configurations
  • Best Practices for Accessing and Using Sensors

A famous quote about learning is :

“The more that you read, the more things you will know. The more that you learn, the more places you’ll go.”

So Let’s begin.


Overview

In Android devices, there are various built-in sensors that can be used to measure the orientation, motions, and various other kinds of environmental conditions. In general, there are two types of sensors in Android devices:

  1. Hardware Sensors: Hardware sensors are physical components present in Android devices. They directly measure properties like field strength and acceleration, according to the type of sensor, and pass the measured data on to software sensors.
  2. Software Sensors: Software sensors, also known as virtual sensors, take the help of one or more hardware sensors and derive their results from the data those sensors collect.

Not every Android device has every sensor: some devices have all of them, and some lack one or two. At the same time, a particular device may have more than one sensor of the same type, with different configurations and capabilities.


Sensor Coordinate System

To express data values or to collect data, the sensors in Android devices use a three-axis coordinate system, i.e., the X, Y, and Z axes. The following figure depicts the position of the axes used by the sensors.

In the default orientation, the horizontal axis is the X-axis, the vertical axis is the Y-axis, and the Z-axis points out of the screen face, i.e., towards the user. This coordinate system is used by the following sensors:

  • Acceleration sensor
  • Gravity sensor
  • Gyroscope
  • Linear acceleration sensor
  • Geomagnetic field sensor

The most important point to understand about this coordinate system is that the axes are not swapped when the device’s screen orientation changes—that is, the sensor’s coordinate system never changes as the device moves. This behavior is the same as the behavior of the OpenGL coordinate system.

Another point to understand is that our application must not assume that a device’s natural (default) orientation is portrait. The natural orientation for many tablet devices is landscape. And the sensor coordinate system is always based on the natural orientation of a device.

Finally, if our application matches sensor data to the on-screen display, we need to use the getRotation() method to determine screen rotation, and then use the remapCoordinateSystem() method to map sensor coordinates to screen coordinates. We need to do this even if our manifest specifies portrait-only display.
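A sketch of that remapping for a rotation-vector sensor, assuming the code runs inside an Activity that implements SensorEventListener (variable names here are illustrative):

```kotlin
override fun onSensorChanged(event: SensorEvent) {
    val rotationMatrix = FloatArray(9)
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)

    // Pick the axis remapping that matches the current screen rotation.
    val (axisX, axisY) = when (windowManager.defaultDisplay.rotation) {
        Surface.ROTATION_90 -> SensorManager.AXIS_Y to SensorManager.AXIS_MINUS_X
        Surface.ROTATION_180 -> SensorManager.AXIS_MINUS_X to SensorManager.AXIS_MINUS_Y
        Surface.ROTATION_270 -> SensorManager.AXIS_MINUS_Y to SensorManager.AXIS_X
        else -> SensorManager.AXIS_X to SensorManager.AXIS_Y
    }

    val remapped = FloatArray(9)
    SensorManager.remapCoordinateSystem(rotationMatrix, axisX, axisY, remapped)
    // remapped now holds a rotation matrix expressed in screen coordinates.
}
```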


Categories of Sensors

Following are the three broad categories of sensors in Android:

  1. Motion Sensors: The sensors responsible for measuring or identifying the shakes and tilts of our Android device are called motion sensors. These sensors measure acceleration and rotational forces along the three axes. Gravity sensors and accelerometers are examples of motion sensors.
  2. Position Sensors: As the name suggests, position sensors are used to determine the physical position of an Android device. Magnetometers and proximity sensors are examples of position sensors.
  3. Environmental Sensors: Environmental properties like temperature, pressure, and humidity are identified with the help of environmental sensors. Examples of environmental sensors include the thermometer, photometer, and barometer.


Android Sensor Framework

Everything related to sensors in an Android device is managed or controlled by the Android Sensor Framework, which we can use to collect raw sensor data. It is part of the android.hardware package and includes the following classes and interfaces:

  1. SensorManager: This is used to get access to the various sensors present in the device, so we can use them according to our needs.
  2. Sensor: This class is used to create an instance of a specific sensor.
  3. SensorEvent: This class is used to find the details of the sensor events.
  4. SensorEventListener: This interface can be used to trigger or perform some action when there is a change in the sensor values.

Following are the usages of the Android Sensor Framework:

  1. You can register or unregister sensor events.
  2. You can collect data from various sensors.
  3. You can find the sensors that are active on a device and determine their capabilities.


Perform Tasks To Use Sensor-Related APIs

In this section, we will see how to identify the various sensors present in a device and how to determine their capabilities. In a typical application, we use these sensor-related APIs to perform two basic tasks:

  • Identifying sensors and sensor capabilities
  • Monitoring Sensor Events


Identifying sensors and sensor capabilities

Identifying sensors and sensor capabilities at runtime is useful if our application has features that rely on specific sensor types or capabilities. For example, we may want to identify all of the sensors that are present on a device and disable any application features that rely on sensors that are not present. Likewise, we may want to identify all of the sensors of a given type so we can choose the sensor implementation that has the optimum performance for our application.

It is not necessary that two Android devices have the same number or the same types of sensors. The availability of sensors varies from device to device and from one Android version to another, so we cannot guarantee that two Android versions or two Android devices have the same sensors. It therefore becomes a necessary task to identify which sensors are present in a particular Android device.

As seen earlier, we can take the help of the Android Sensor Framework to find the sensors that are present in a particular Android device. Not only that, with the help of various methods of the sensor framework, we can determine the capabilities of a sensor like its resolution, its maximum range, and its power requirements.

Following are the steps that need to be followed to get the list of available sensors in a device:

  1. Create an instance of the SensorManager.
  2. Call the getSystemService() method and pass SENSOR_SERVICE as an argument. This SENSOR_SERVICE is used to retrieve a SensorManager to access sensors.
  3. Call the getSensorList() method to get the names of all the sensors present in the device. The parameter of this method is the sensor type: either use TYPE_ALL to get all the sensors available in the device, or pass a particular type, for example TYPE_GRAVITY or TYPE_GYROSCOPE, to get the list of sensors of that type only (we can have more than one sensor of the same type).
  4. If we just want the default sensor of a particular type, we can use the getDefaultSensor() method instead. This method returns null if there is no sensor of that type in the Android device.

//Step 1
private lateinit var sensorManager: SensorManager

//Step 2
sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager

//Step 3
//To get a list of all sensors, use TYPE_ALL
val deviceSensors: List<Sensor> = sensorManager.getSensorList(Sensor.TYPE_ALL)
//Or you can use TYPE_GRAVITY, TYPE_GYROSCOPE or some other sensor
//val deviceSensors: List<Sensor> = sensorManager.getSensorList(Sensor.TYPE_GRAVITY)

//Step 4
if (sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY) != null) {
    //There's a gravity sensor.
} else {
    //No gravity sensor.
}

Apart from finding the list of available sensors, we can also check the capabilities of a particular sensor, i.e., its resolution, power, range, and so on.

Sensor.getResolution() //returns a float value which is the resolution of the sensor

Sensor.getMaximumRange() //returns a float value which is the maximum range of the sensor

Sensor.getPower() //returns a float value which is the power in mA used by sensor


Monitoring Sensor Events

Monitoring sensor events is how we acquire raw sensor data. A sensor event occurs every time a sensor detects a change in the parameters it is measuring. A sensor event provides us with four pieces of information: the name of the sensor that triggered the event, the timestamp for the event, the accuracy of the event, and the raw sensor data that triggered the event.

To monitor raw sensor data we need to implement two callback methods that are exposed through the SensorEventListener interface: onAccuracyChanged() and onSensorChanged(). The Android system calls these methods whenever the following occurs:

  1. onAccuracyChanged(): This is called when there is a change in the accuracy of measurement of the sensor. This method will provide the Sensor object that has changed and the new accuracy. There are four statuses of accuracy i.e. SENSOR_STATUS_ACCURACY_LOW, SENSOR_STATUS_ACCURACY_MEDIUM, SENSOR_STATUS_ACCURACY_HIGH, SENSOR_STATUS_UNRELIABLE.
  2. onSensorChanged(): This is called when there is an availability of new sensor data. This method will provide us with a SensorEvent object that contains new sensor data.
class SensorActivity : Activity(), SensorEventListener {
    private lateinit var sensorManager: SensorManager
    private var mGravity: Sensor? = null

    public override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager

        //gravity sensor
        mGravity = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        //If sensor accuracy changes.
    }

    override fun onSensorChanged(event: SensorEvent) {
        //If there is a new sensor data
    }

    //register
    override fun onResume() {
        super.onResume()
        mGravity?.also { gravity ->
            sensorManager.registerListener(this, gravity, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    //unregister
    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)
    }
}

In this example, the default data delay (SENSOR_DELAY_NORMAL) is specified when the registerListener() method is invoked. The data delay (or sampling rate) controls the interval at which sensor events are sent to our application via the onSensorChanged() callback method. The default data delay is suitable for monitoring typical screen orientation changes and uses a delay of 200,000 microseconds. We can specify other data delays, such as SENSOR_DELAY_GAME (20,000 microsecond delay), SENSOR_DELAY_UI (60,000 microsecond delay), or SENSOR_DELAY_FASTEST (0 microsecond delay). As of Android 3.0 (API Level 11) we can also specify the delay as an absolute value (in microseconds).

The delay that we specify is only a suggested delay. The Android system and other applications can alter this delay. As a best practice, we should specify the largest delay that we can because the system typically uses a smaller delay than the one we specify (that is, we should choose the slowest sampling rate that still meets the needs of our application). Using a larger delay imposes a lower load on the processor and therefore uses less power.

There is no public method for determining the rate at which the sensor framework is sending sensor events to our application; however, we can use the timestamps that are associated with each sensor event to calculate the sampling rate over several events. We should not have to change the sampling rate (delay) once we set it. If for some reason we do need to change the delay, we will have to unregister and reregister the sensor listener.
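For example, the timestamp-based estimate can be computed with a small helper like this. The function is plain Kotlin, so it can be fed the last few SensorEvent.timestamp values collected in onSensorChanged():

```kotlin
// Estimate the average sampling period, in microseconds, from a list of
// SensorEvent timestamps (which are in nanoseconds).
fun estimateSamplingPeriodUs(timestampsNs: List<Long>): Double {
    require(timestampsNs.size >= 2) { "need at least two events" }
    val deltasNs = timestampsNs.zipWithNext { a, b -> b - a }
    return deltasNs.average() / 1_000.0  // nanoseconds -> microseconds
}

// Events arriving every 200 ms correspond to SENSOR_DELAY_NORMAL:
// estimateSamplingPeriodUs(listOf(0L, 200_000_000L, 400_000_000L)) == 200_000.0
```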

It’s also important to note that this example uses the onResume() and onPause() callback methods to register and unregister the sensor event listener. As a best practice we should always disable sensors we don’t need, especially when our activity is paused. Failing to do so can drain the battery in just a few hours because some sensors have substantial power requirements and can use up battery power quickly. The system will not disable sensors automatically when the screen turns off.


Handling Different Sensor Configurations

Android does not specify a standard sensor configuration for devices, which means device manufacturers can incorporate any sensor configuration that they want into their Android-powered devices. As a result, devices can include a variety of sensors in a wide range of configurations. If our application relies on a specific type of sensor, we have to ensure that the sensor is present on a device so our app can run successfully.

We have two options for ensuring that a given sensor is present on a device:

  • Detect sensors at runtime and enable or disable application features as appropriate.
  • Use Google Play filters to target devices with specific sensor configurations.


Detecting sensors at runtime

If our application uses a specific type of sensor, but doesn’t rely on it, we can use the sensor framework to detect the sensor at runtime and then disable or enable application features as appropriate. For example, a navigation application might use the temperature sensor, pressure sensor, GPS sensor, and geomagnetic field sensor to display the temperature, barometric pressure, location, and compass bearing. If a device doesn’t have a pressure sensor, we can use the sensor framework to detect the absence of the pressure sensor at runtime and then disable the portion of our application’s UI that displays pressure. For example, the following code checks whether there’s a pressure sensor on a device:

private lateinit var sensorManager: SensorManager
...
sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager

if (sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE) != null) {
    // Success! There's a pressure sensor.
} else {
    // Failure! No pressure sensor.
}


Using Google Play filters to target specific sensor configurations

If we are publishing our application on Google Play, we can use the <uses-feature> element in our manifest file to filter our application from devices that do not have the appropriate sensor configuration. The <uses-feature> element has several hardware descriptors that let us filter our application based on the presence of specific sensors. The sensors we can list include: accelerometer, barometer, compass (geomagnetic field), gyroscope, light, and proximity. The following is an example manifest entry that filters out devices that do not have an accelerometer:

<uses-feature android:name="android.hardware.sensor.accelerometer"
              android:required="true" />

If we add this element and descriptor to our application’s manifest, users will see our application on Google Play only if their device has an accelerometer.

We should set the descriptor to android:required="true" only if our application relies entirely on a specific sensor. If our application uses a sensor for some functionality but still runs without it, we should list the sensor in the <uses-feature> element and set the descriptor to android:required="false". This ensures that users can install our app even if their device does not have that particular sensor. This is also a project-management best practice that helps us keep track of the features our application uses. Keep in mind that if our application uses a particular sensor but still runs without it, we should detect the sensor at runtime and disable or enable application features as appropriate.
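For example, if our app uses the compass when one is available but still runs without it, the manifest entry (sensor name taken from the list above) would look like this:

```xml
<uses-feature android:name="android.hardware.sensor.compass"
              android:required="false" />
```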


Best Practices for Accessing and Using Sensors

As we design our sensor implementation, be sure to follow the guidelines that are discussed in this section. These guidelines are recommended best practices for anyone who is using the sensor framework to access sensors and acquire sensor data.


1. Only gather sensor data in the foreground

On devices running Android 9 (API level 28) or higher, apps running in the background have the following restrictions:

  • Sensors that use the continuous reporting mode, such as accelerometers and gyroscopes, don’t receive events.
  • Sensors that use the on-change or one-shot reporting modes don’t receive events.

Given these restrictions, it’s best to detect sensor events either when your app is in the foreground or as part of a foreground service.


2. Unregister sensor listeners

Be sure to unregister a sensor’s listener when we are done using the sensor or when the sensor activity pauses. If a sensor listener is registered and its activity is paused, the sensor will continue to acquire data and use battery resources unless we unregister the sensor. The following code shows how to use the onPause() method to unregister a listener:

private lateinit var sensorManager: SensorManager
...
override fun onPause() {
    super.onPause()
    sensorManager.unregisterListener(this)
}


3. Test with the Android Emulator

The Android Emulator includes a set of virtual sensor controls that allow you to test sensors such as accelerometer, ambient temperature, magnetometer, proximity, light, and more.

The emulator uses a connection with an Android device that is running the SdkControllerSensor app. Note that this app is available only on devices running Android 4.0 (API level 14) or higher. (If the device is running Android 4.0, it must have Revision 2 installed.) The SdkControllerSensor app monitors changes in the sensors on the device and transmits them to the emulator, which then updates its virtual sensors based on the new values it receives from the sensors on our device.


4. Don’t block the onSensorChanged() method

Sensor data can change at a high rate, which means the system may call the onSensorChanged(SensorEvent) method quite often. As a best practice, we should do as little as possible within the onSensorChanged(SensorEvent) method so we don’t block it. If our application requires us to do any data filtering or reduction of sensor data, we should perform that work outside of the onSensorChanged(SensorEvent) method.
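One common pattern, sketched below, is to copy the event values and hand them to a background executor. The lowPassFilter function is a hypothetical stand-in for whatever filtering or reduction our app performs:

```kotlin
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

// Inside our SensorEventListener class:
private val sensorWorker: ExecutorService = Executors.newSingleThreadExecutor()

override fun onSensorChanged(event: SensorEvent) {
    // Copy the values: the system may recycle the SensorEvent object.
    val values = event.values.clone()
    // Do the heavy work off the main thread, keeping the callback light.
    sensorWorker.execute { lowPassFilter(values) }  // lowPassFilter is hypothetical
}
```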


5. Avoid using deprecated methods or sensor types

Several methods and constants have been deprecated. In particular, the TYPE_ORIENTATION sensor type has been deprecated. To get orientation data we should use the getOrientation() method instead. Likewise, the TYPE_TEMPERATURE sensor type has been deprecated. We should use the TYPE_AMBIENT_TEMPERATURE sensor type instead on devices that are running Android 4.0.


6. Verify sensors before we use them

Always verify that a sensor exists on a device before we attempt to acquire data from it. Don’t assume that a sensor exists simply because it’s a frequently-used sensor. Device manufacturers are not required to provide any particular sensors in their devices.


7. Choose sensor delays carefully

When we register a sensor with the registerListener() method, be sure we choose a delivery rate that is suitable for our application or use-case. Sensors can provide data at very high rates. Allowing the system to send extra data that we don’t need wastes system resources and uses battery power.

That’s all about in this article.


Conclusion

In this article, we learned how to use Android sensors. We learned about hardware and software sensors, and we saw how the Android Sensor Framework can be used to determine the sensors present in an Android device. Finally, we saw how to use the SensorEventListener interface.

Thanks for reading! I hope you enjoyed and learned about the sensors concept in Android. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to the blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.


If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING !!

Android – Understanding App Architecture

Hello Readers, CoolMonkTechie heartily welcomes you in this article (Understanding App Architecture in Android).

In this article, We will learn about App Architecture in Android. We will also discuss about best practices and recommended architecture for building robust, production-quality apps.

A famous quote about learning is :

” The beautiful thing about learning is that nobody can take it away from you.”

So Let’s begin.


Mobile App User Experiences

In the majority of cases, desktop apps have a single entry point from a desktop or program launcher, then run as a single, monolithic process. Android apps, on the other hand, have a much more complex structure. A typical Android app contains multiple app components, including activities, fragments, services, content providers, and broadcast receivers.

We declare most of these app components in our app manifest. The Android OS then uses this file to decide how to integrate our app into the device’s overall user experience. Given that a properly-written Android app contains multiple components and that users often interact with multiple apps in a short period of time, apps need to adapt to different kinds of user-driven workflows and tasks.

For example, consider what happens when we share a photo in our favorite social networking app:

  1. The app triggers a camera intent. The Android OS then launches a camera app to handle the request. At this point, the user has left the social networking app, but their experience is still seamless.
  2. The camera app might trigger other intents, like launching the file chooser, which may launch yet another app.
  3. Eventually, the user returns to the social networking app and shares the photo.

At any point during the process, the user could be interrupted by a phone call or notification. After acting upon this interruption, the user expects to be able to return to, and resume, this photo-sharing process. This app-hopping behavior is common on mobile devices, so our app must handle these flows correctly.

Keep in mind that mobile devices are also resource-constrained, so at any time, the operating system might kill some app processes to make room for new ones.

Given the conditions of this environment, it’s possible for our app components to be launched individually and out-of-order, and the operating system or user can destroy them at any time. Because these events aren’t under our control, we shouldn’t store any app data or state in our app components, and our app components shouldn’t depend on each other.


Common Architectural Principles

If we shouldn’t use app components to store app data and state, how should we design our app?


Separation of concerns

The most important principle to follow is separation of concerns. It’s a common mistake to write all our code in an Activity or a Fragment. These UI-based classes should only contain logic that handles UI and operating system interactions. By keeping these classes as lean as possible, we can avoid many lifecycle-related problems.

Keep in mind that we don’t own implementations of Activity and Fragment; rather, these are just glue classes that represent the contract between the Android OS and our app. The OS can destroy them at any time based on user interactions or because of system conditions like low memory. To provide a satisfactory user experience and a more manageable app maintenance experience, it’s best to minimize our dependency on them.


Drive UI from a model

Another important principle is that we should drive our UI from a model, preferably a persistent model. Models are components that are responsible for handling the data for an app. They’re independent from the View objects and app components in our app, so they’re unaffected by the app’s lifecycle and the associated concerns.

Persistence is ideal for the following reasons:

  • Our users don’t lose data if the Android OS destroys our app to free up resources.
  • Our app continues to work in cases when a network connection is flaky or not available.

By basing our app on model classes with the well-defined responsibility of managing the data, our app is more testable and consistent.


Recommended App Architecture

In this section, we demonstrate how to structure an app using Architecture Components by working through an end-to-end use case.

Imagine we’re building a UI that shows a user profile. We use a private backend and a REST API to fetch the data for a given profile.


Overview

To start, consider the following diagram, which shows how all of the modules interact with one another once the app is designed:

Notice that each component depends only on the component one level below it. For example, activities and fragments depend only on a view model. The repository is the only class that depends on multiple other classes; in this example, the repository depends on a persistent data model and a remote backend data source.

This design creates a consistent and pleasant user experience. Regardless of whether the user comes back to the app several minutes after they’ve last closed it or several days later, they instantly see a user’s information that the app persists locally. If this data is stale, the app’s repository module starts updating the data in the background.


Build The User Interface

The UI consists of a fragment, UserProfileFragment, and its corresponding layout file, user_profile_layout.xml.

To drive the UI, our data model needs to hold the following data elements:

  • User ID: The identifier for the user. It’s best to pass this information into the fragment using the fragment arguments. If the Android OS destroys our process, this information is preserved, so the ID is available the next time our app is restarted.
  • User object: A data class that holds details about the user.

We use a UserProfileViewModel, based on the ViewModel architecture component, to keep this information.

A ViewModel object provides the data for a specific UI component, such as a fragment or activity, and contains data-handling business logic to communicate with the model. For example, the ViewModel can call other components to load the data, and it can forward user requests to modify the data. The ViewModel doesn’t know about UI components, so it isn’t affected by configuration changes, such as recreating an activity when rotating the device.

We’ve now defined the following files:

  • user_profile_layout.xml: The UI layout definition for the screen.
  • UserProfileFragment: The UI controller that displays the data.
  • UserProfileViewModel: The class that prepares the data for viewing in the UserProfileFragment and reacts to user interactions.

The following code snippets show the starting contents for these files. (The layout file is omitted for simplicity.)

UserProfileViewModel

class UserProfileViewModel : ViewModel() {
   val userId : String = TODO()
   val user : User = TODO()
}

UserProfileFragment

class UserProfileFragment : Fragment() {
   // To use the viewModels() extension function, include
   // "androidx.fragment:fragment-ktx:latest-version" in your app
   // module's build.gradle file.
   private val viewModel: UserProfileViewModel by viewModels()

   override fun onCreateView(
       inflater: LayoutInflater, container: ViewGroup?,
       savedInstanceState: Bundle?
   ): View {
       return inflater.inflate(R.layout.user_profile_layout, container, false)
   }
}

Now that we have these code modules, how do we connect them? After all, when the user field is set in the UserProfileViewModel class, we need a way to inform the UI.

To obtain the user, our ViewModel needs to access the Fragment arguments. We can either pass them from the Fragment or, better, use the SavedState module to make our ViewModel read the arguments directly:

// UserProfileViewModel
class UserProfileViewModel(
   savedStateHandle: SavedStateHandle
) : ViewModel() {
   val userId : String = savedStateHandle["uid"] ?:
          throw IllegalArgumentException("missing user id")
   val user : User = TODO()
}

// UserProfileFragment
private val viewModel: UserProfileViewModel by viewModels(
   factoryProducer = { SavedStateVMFactory(this) }
   ...
)

Here, SavedStateHandle allows ViewModel to access the saved state and arguments of the associated Fragment or Activity.

Now we need to inform our Fragment when the user object is obtained. This is where the LiveData architecture component comes in.

LiveData is an observable data holder. Other components in our app can monitor changes to objects using this holder without creating explicit and rigid dependency paths between them. The LiveData component also respects the lifecycle state of our app’s components—such as activities, fragments, and services—and includes cleanup logic to prevent object leaking and excessive memory consumption.

If we’re already using a library like RxJava, we can continue using it instead of LiveData. When we use libraries and approaches like these, however, we must handle our app’s lifecycle properly. In particular, we should pause our data streams when the related LifecycleOwner is stopped and destroy these streams when the related LifecycleOwner is destroyed. We can also add the android.arch.lifecycle:reactivestreams artifact to use LiveData with another reactive streams library, such as RxJava2.

To incorporate the LiveData component into our app, we change the field type in the UserProfileViewModel to LiveData<User>. Now, the UserProfileFragment is informed when the data is updated. Furthermore, because this LiveData field is lifecycle aware, it automatically cleans up references after they’re no longer needed.

UserProfileViewModel

class UserProfileViewModel(
   savedStateHandle: SavedStateHandle
) : ViewModel() {
   val userId : String = savedStateHandle["uid"] ?:
          throw IllegalArgumentException("missing user id")
   val user : LiveData<User> = TODO()
}

Now we modify UserProfileFragment to observe the data and update the UI:

UserProfileFragment

override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
   super.onViewCreated(view, savedInstanceState)
   viewModel.user.observe(viewLifecycleOwner) {
       // update UI
   }
}

Every time the user profile data is updated, the onChanged() callback is invoked, and the UI is refreshed.

If we’re familiar with other libraries where observable callbacks are used, we might have realized that we didn’t override the fragment’s onStop() method to stop observing the data. This step isn’t necessary with LiveData because it’s lifecycle aware, which means it doesn’t invoke the onChanged() callback unless the fragment is in an active state (that is, it has received onStart() but hasn’t yet received onStop()). LiveData also automatically removes the observer when the fragment’s onDestroy() method is called.

We also didn’t add any logic to handle configuration changes, such as the user rotating the device’s screen. The UserProfileViewModel is automatically restored when the configuration changes, so as soon as the new fragment is created, it receives the same instance of ViewModel, and the callback is invoked immediately using the current data. Given that ViewModel objects are intended to outlast the corresponding View objects that they update, we shouldn’t include direct references to View objects within our implementation of ViewModel.


Fetch Data

Now that we’ve used LiveData to connect the UserProfileViewModel to the UserProfileFragment, how can we fetch the user profile data?

For this example, we assume that our backend provides a REST API. We use the Retrofit library to access our backend, though we are free to use a different library that serves the same purpose.

Here’s our definition of Webservice that communicates with our backend:

Webservice

interface Webservice {
   /**
    * @GET declares an HTTP GET request
    * @Path("user") annotation on the userId parameter marks it as a
    * replacement for the {user} placeholder in the @GET path
    */
   @GET("/users/{user}")
   fun getUser(@Path("user") userId: String): Call<User>
}

A first idea for implementing the ViewModel might involve directly calling the Webservice to fetch the data and assign this data to our LiveData object. This design works, but by using it, our app becomes more and more difficult to maintain as it grows. It gives too much responsibility to the UserProfileViewModel class, which violates the separation of concerns principle. Additionally, the scope of a ViewModel is tied to an Activity or Fragment lifecycle, which means that the data from the Webservice is lost when the associated UI object’s lifecycle ends. This behavior creates an undesirable user experience.

Instead, our ViewModel delegates the data-fetching process to a new module, a repository.

Repository modules handle data operations. They provide a clean API so that the rest of the app can retrieve this data easily. They know where to get the data from and what API calls to make when data is updated. We can consider repositories to be mediators between different data sources, such as persistent models, web services, and caches.

Our UserRepository class, shown in the following code snippet, uses an instance of Webservice to fetch a user’s data:

UserRepository

class UserRepository {
   private val webservice: Webservice = TODO()
   // ...
   fun getUser(userId: String): LiveData<User> {
       // This isn't an optimal implementation. We'll fix it later.
       val data = MutableLiveData<User>()
       webservice.getUser(userId).enqueue(object : Callback<User> {
           override fun onResponse(call: Call<User>, response: Response<User>) {
               data.value = response.body()
           }
           // Error case is left out for brevity.
           override fun onFailure(call: Call<User>, t: Throwable) {
               TODO()
           }
       })
       return data
   }
}

Even though the repository module looks unnecessary, it serves an important purpose: it abstracts the data sources from the rest of the app. Now, our UserProfileViewModel doesn’t know how the data is fetched, so we can provide the view model with data obtained from several different data-fetching implementations.

Manage dependencies between components

The UserRepository class above needs an instance of Webservice to fetch the user’s data. It could simply create the instance, but to do that, it also needs to know the dependencies of the Webservice class. Additionally, UserRepository is probably not the only class that needs a Webservice. This situation requires us to duplicate code, as each class that needs a reference to Webservice needs to know how to construct it and its dependencies. If each class creates a new Webservice, our app could become very resource heavy.

We can use the following design patterns to address this problem:

  • Dependency injection (DI): Dependency injection allows classes to define their dependencies without constructing them. At runtime, another class is responsible for providing these dependencies. We recommend the Dagger 2 library for implementing dependency injection in Android apps. Dagger 2 automatically constructs objects by walking the dependency tree, and it provides compile-time guarantees on dependencies.
  • Service locator: The service locator pattern provides a registry where classes can obtain their dependencies instead of constructing them.

It’s easier to implement a service registry than to use DI, so if we aren’t familiar with DI, we can use the service locator pattern instead.

These patterns allow us to scale our code because they provide clear patterns for managing dependencies without duplicating code or adding complexity. Furthermore, these patterns allow us to quickly switch between test and production data-fetching implementations.
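As a minimal illustration of the service locator option, the sketch below uses a plain Kotlin registry. The ServiceLocator object and its API here are illustrative, not part of any Android library:

```kotlin
// Illustrative service-locator sketch in plain Kotlin (not an Android API).
object ServiceLocator {
    private val services = mutableMapOf<Class<*>, Any>()

    // Register one shared instance per type (e.g., a single Webservice).
    fun <T : Any> register(type: Class<T>, instance: T) {
        services[type] = instance
    }

    // Look up the shared instance; fail fast if nothing was registered.
    @Suppress("UNCHECKED_CAST")
    fun <T : Any> get(type: Class<T>): T =
        services[type] as? T
            ?: error("No service registered for ${type.name}")
}
```

At app startup we would register a single Webservice instance once; classes such as UserRepository would then call ServiceLocator.get(Webservice::class.java) instead of constructing their own copy.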


Connect ViewModel and the repository

Now, we modify our UserProfileViewModel to use the UserRepository object:

UserProfileViewModel

class UserProfileViewModel @Inject constructor(
   savedStateHandle: SavedStateHandle,
   userRepository: UserRepository
) : ViewModel() {
   val userId : String = savedStateHandle["uid"] ?:
          throw IllegalArgumentException("missing user id")
   val user : LiveData<User> = userRepository.getUser(userId)
}


Cache Data

The UserRepository implementation abstracts the call to the Webservice object, but because it relies on only one data source, it’s not very flexible.

The key problem with the UserRepository implementation is that after it fetches data from our backend, it doesn’t store that data anywhere. Therefore, if the user leaves the UserProfileFragment, then returns to it, our app must re-fetch the data, even if it hasn’t changed.

This design is suboptimal for the following reasons:

  • It wastes valuable network bandwidth.
  • It forces the user to wait for the new query to complete.

To address these shortcomings, we add a new data source to our UserRepository, which caches the User objects in memory:

UserRepository

// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
   private val webservice: Webservice,
   // Simple in-memory cache. Details omitted for brevity.
   private val userCache: UserCache
) {
   fun getUser(userId: String): LiveData<User> {
       val cached: LiveData<User>? = userCache.get(userId)
       if (cached != null) {
           return cached
       }
       val data = MutableLiveData<User>()
       // The LiveData object is currently empty, but it's okay to add it to the
       // cache here because it will pick up the correct data once the query
       // completes.
       userCache.put(userId, data)
       // This implementation is still suboptimal but better than before.
       // A complete implementation also handles error cases.
       webservice.getUser(userId).enqueue(object : Callback<User> {
           override fun onResponse(call: Call<User>, response: Response<User>) {
               data.value = response.body()
           }

           // Error case is left out for brevity.
           override fun onFailure(call: Call<User>, t: Throwable) {
               TODO()
           }
       })
       return data
   }
}


Persist Data

Using our current implementation, if the user rotates the device or leaves and immediately returns to the app, the existing UI becomes visible instantly because the repository retrieves data from our in-memory cache.

However, what happens if the user leaves the app and comes back hours later, after the Android OS has killed the process? By relying on our current implementation in this situation, we need to fetch the data again from the network. This refetching process isn’t just a bad user experience; it’s also wasteful because it consumes valuable mobile data.

We could fix this issue by caching the web requests, but that creates a key new problem: What happens if the same user data shows up from another type of request, such as fetching a list of friends? The app would show inconsistent data, which is confusing at best. For example, our app might show two different versions of the same user’s data if the user made the list-of-friends request and the single-user request at different times. Our app would need to figure out how to merge this inconsistent data.

The proper way to handle this situation is to use a persistent model. This is where the Room persistence library comes to the rescue.

Room is an object-mapping library that provides local data persistence with minimal boilerplate code. At compile time, it validates each query against your data schema, so broken SQL queries result in compile-time errors instead of runtime failures. Room abstracts away some of the underlying implementation details of working with raw SQL tables and queries. It also allows you to observe changes to the database’s data, including collections and join queries, exposing such changes using LiveData objects. It even explicitly defines execution constraints that address common threading issues, such as accessing storage on the main thread.

To use Room, we need to define our local schema. First, we add the @Entity annotation to our User data model class and a @PrimaryKey annotation to the class’s id field. These annotations mark User as a table in our database and id as the table’s primary key:

User

@Entity
data class User(
   @PrimaryKey val id: String,
   val name: String,
   val lastName: String
)

Then, we create a database class by implementing RoomDatabase for our app:

UserDatabase

@Database(entities = [User::class], version = 1)
abstract class UserDatabase : RoomDatabase()

Notice that UserDatabase is abstract. Room automatically provides an implementation of it.

We now need a way to insert user data into the database. For this task, we create a data access object (DAO).

UserDao

@Dao
interface UserDao {
   @Insert(onConflict = REPLACE)
   fun save(user: User)

   @Query("SELECT * FROM user WHERE id = :userId")
   fun load(userId: String): LiveData<User>
}

Notice that the load method returns an object of type LiveData<User>. Room knows when the database is modified and automatically notifies all active observers when the data changes. Because Room uses LiveData, this operation is efficient; it updates the data only when there is at least one active observer.

Room checks invalidations based on table modifications, which means it may dispatch false positive notifications.

With our UserDao class defined, we then reference the DAO from our database class:

UserDatabase

@Database(entities = [User::class], version = 1)
abstract class UserDatabase : RoomDatabase() {
   abstract fun userDao(): UserDao
}

Now we can modify our UserRepository to incorporate the Room data source:

// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
   private val webservice: Webservice,
   private val executor: Executor,
   private val userDao: UserDao
) {
   fun getUser(userId: String): LiveData<User> {
       refreshUser(userId)
       // Returns a LiveData object directly from the database.
       return userDao.load(userId)
   }

   private fun refreshUser(userId: String) {
       // Runs in a background thread.
       executor.execute {
           // Check whether the user data was fetched recently. hasUser() is
           // an additional DAO query (not shown) that performs this check.
           val userExists = userDao.hasUser(FRESH_TIMEOUT)
           if (!userExists) {
               // Refreshes the data.
               val response = webservice.getUser(userId).execute()

               // Check for errors here.

               // Updates the database. The LiveData object automatically
               // refreshes, so we don't need to do anything else here.
               userDao.save(response.body()!!)
           }
       }
   }

   companion object {
       val FRESH_TIMEOUT = TimeUnit.DAYS.toMillis(1)
   }
}

Notice that even though we changed where the data comes from in UserRepository, we didn’t need to change our UserProfileViewModel or UserProfileFragment. This small-scoped update demonstrates the flexibility that our app’s architecture provides. It’s also great for testing, because we can provide a fake UserRepository and test our production UserProfileViewModel at the same time.

If users wait a few days before returning to an app that uses this architecture, it’s likely that they’ll see out-of-date information until the repository can fetch updated information. Depending on our use case, we may not want to show this out-of-date information. Instead, we can display placeholder data, which shows example values and indicates that our app is currently fetching and loading up-to-date information.

Single source of truth

It’s common for different REST API endpoints to return the same data. For example, if our backend has another endpoint that returns a list of friends, the same user object could come from two different API endpoints, maybe even using different levels of granularity. If the UserRepository were to return the response from the Webservice request as-is, without checking for consistency, our UIs could show confusing information because the version and format of data from the repository would depend on the endpoint most recently called.

For this reason, our UserRepository implementation saves web service responses into the database. Changes to the database then trigger callbacks on active LiveData objects. Using this model, the database serves as the single source of truth, and other parts of the app access it using our UserRepository. Regardless of whether we use a disk cache, we recommend that our repository designate a data source as the single source of truth for the rest of your app.


Show in-progress operations

In some use cases, such as pull-to-refresh, it’s important for the UI to show the user that there’s currently a network operation in progress. It’s good practice to separate the UI action from the actual data because the data might be updated for various reasons. For example, if we fetched a list of friends, the same user might be fetched again programmatically, triggering a LiveData<User> update. From the UI’s perspective, the fact that there’s a request in flight is just another data point, similar to any other piece of data in the User object itself.

We can use one of the following strategies to display a consistent data-updating status in the UI, regardless of where the request to update the data came from:

  • Change getUser() to return an object of type LiveData. This object would include the status of the network operation.
    For an example, see the NetworkBoundResource implementation in the android-architecture-components GitHub project.
  • Provide another public function in the UserRepository class that can return the refresh status of the User. This option is better if we want to show the network status in our UI only when the data-fetching process originated from an explicit user action, such as pull-to-refresh.
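As a rough sketch of the second option, a repository could track refresh status in a small helper that is independent of the data stream. The RefreshStatus and RefreshTracker names below are illustrative, not from the Android libraries:

```kotlin
// Illustrative status holder, separate from the data itself.
sealed class RefreshStatus {
    object Idle : RefreshStatus()
    object Running : RefreshStatus()
    data class Failed(val message: String) : RefreshStatus()
}

class RefreshTracker {
    var status: RefreshStatus = RefreshStatus.Idle
        private set

    // Wraps a refresh operation so the UI can query its status
    // independently of the stream that delivers the resulting data.
    fun <T> track(refresh: () -> T): T {
        status = RefreshStatus.Running
        return try {
            val result = refresh()
            status = RefreshStatus.Idle
            result
        } catch (e: Exception) {
            status = RefreshStatus.Failed(e.message ?: "unknown error")
            throw e
        }
    }
}
```

The UI would observe the data stream as before and consult the tracker only to decide whether to show a progress indicator.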


Test Each Component

In the separation of concerns section, we mentioned that one key benefit of following this principle is testability.

The following list shows how to test each code module from our extended example:

  • User interface and interactions: Use an Android UI instrumentation test. The best way to create this test is to use the Espresso library. We can create the fragment and provide it a mock UserProfileViewModel. Because the fragment communicates only with the UserProfileViewModel, mocking this one class is sufficient to fully test our app’s UI.
  • ViewModel: We can test the UserProfileViewModel class using a JUnit test. We only need to mock one class, UserRepository.
  • UserRepository: We can test the UserRepository using a JUnit test as well. We need to mock the Webservice and the UserDao. In these tests, verify the following behavior:
    • The repository makes the correct web service calls.
    • It saves results into the database.
    • The repository doesn’t make unnecessary requests if the data is cached and up to date.
  • Because both Webservice and UserDao are interfaces, we can mock them or create fake implementations for more complex test cases.
  • UserDao: Test DAO classes using instrumentation tests. Because these instrumentation tests don’t require any UI components, they run quickly. For each test, create an in-memory database to ensure that the test doesn’t have any side effects, such as changing the database files on disk.
  • Webservice: In these tests, avoid making network calls to our backend. It’s important for all tests, especially web-based ones, to be independent from the outside world. Several libraries, including MockWebServer, can help us create a fake local server for these tests.
  • Testing Artifacts: Architecture Components provides a Maven artifact to control its background threads. The androidx.arch.core:core-testing artifact contains the following JUnit rules:
    • InstantTaskExecutorRule: Use this rule to instantly execute any background operation on the calling thread.
    • CountingTaskExecutorRule: Use this rule to wait on background operations of Architecture Components. You can also associate this rule with Espresso as an idling resource.
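The caching check listed above (that the repository skips redundant requests) can be illustrated in plain Kotlin with a hand-written fake, without any Android test libraries. SimpleRepository and FakeSource below are hypothetical stand-ins for UserRepository and a mocked Webservice:

```kotlin
// Plain-Kotlin sketch of testing caching behavior with a fake data source.
// These types are illustrative stand-ins, not the classes from this article.
interface UserSource {
    fun fetch(id: String): String
}

class SimpleRepository(private val source: UserSource) {
    private val cache = mutableMapOf<String, String>()

    // Returns the cached value if present; otherwise fetches and caches it.
    fun getUser(id: String): String = cache.getOrPut(id) { source.fetch(id) }
}

// A fake that records how many calls it received, standing in for a mock.
class FakeSource : UserSource {
    var calls = 0
        private set

    override fun fetch(id: String): String {
        calls++
        return "user-$id"
    }
}
```

A test would call getUser("42") twice and assert that the fake received exactly one call, verifying that the repository doesn’t make unnecessary requests when the data is cached.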


Best Practices

Programming is a creative field, and building Android apps isn’t an exception. There are many ways to solve a problem, be it communicating data between multiple activities or fragments, retrieving remote data and persisting it locally for offline mode, or any number of other common scenarios that nontrivial apps encounter.

Although the following recommendations aren’t mandatory, it has been our experience that following them makes our code base more robust, testable, and maintainable in the long run:


1. Avoid designating our app’s entry points—such as activities, services, and broadcast receivers—as sources of data.

Instead, they should only coordinate with other components to retrieve the subset of data that is relevant to that entry point. Each app component is rather short-lived, depending on the user’s interaction with their device and the overall current health of the system.


2. Create well-defined boundaries of responsibility between various modules of our app.

For example, don’t spread the code that loads data from the network across multiple classes or packages in your code base. Similarly, don’t define multiple unrelated responsibilities—such as data caching and data binding—in the same class.


3. Expose as little as possible from each module.

Don’t be tempted to create “just that one” shortcut that exposes an internal implementation detail from one module. We might gain a bit of time in the short term, but we then incur technical debt many times over as our codebase evolves.


4. Consider how to make each module testable in isolation.

For example, having a well-defined API for fetching data from the network makes it easier to test the module that persists that data in a local database. If, instead, we mix the logic from these two modules in one place, or distribute our networking code across our entire code base, it becomes much more difficult—if not impossible—to test.


5. Focus on the unique core of our app so it stands out from other apps.

Don’t reinvent the wheel by writing the same boilerplate code again and again. Instead, focus our time and energy on what makes our app unique, and let the Android Architecture Components and other recommended libraries handle the repetitive boilerplate.


6. Persist as much relevant and fresh data as possible.

That way, users can enjoy our app’s functionality even when their device is in offline mode. Remember that not all of our users enjoy constant, high-speed connectivity.


7. Assign one data source to be the single source of truth.

Whenever our app needs to access this piece of data, it should always originate from this single source of truth.


Exposing Network Status

In this section, we demonstrate how to expose network status using a Resource class that encapsulates both the data and its state.

The following code snippet provides a sample implementation of Resource:

Resource

// A generic class that contains data and status about loading this data.
sealed class Resource<T>(
   val data: T? = null,
   val message: String? = null
) {
   class Success<T>(data: T) : Resource<T>(data)
   class Loading<T>(data: T? = null) : Resource<T>(data)
   class Error<T>(message: String, data: T? = null) : Resource<T>(data, message)
}
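To show how the UI side might consume this class, here is a small, hypothetical rendering helper that branches on the three Resource states (the Resource definition from above is repeated so the snippet stands alone):

```kotlin
// Resource as defined above, repeated so this snippet is self-contained.
sealed class Resource<T>(
    val data: T? = null,
    val message: String? = null
) {
    class Success<T>(data: T) : Resource<T>(data)
    class Loading<T>(data: T? = null) : Resource<T>(data)
    class Error<T>(message: String, data: T? = null) : Resource<T>(data, message)
}

// Hypothetical helper showing how a UI layer might branch on each state.
// Because Resource is sealed, the when expression is exhaustive.
fun <T> describe(resource: Resource<T>): String = when (resource) {
    is Resource.Success -> "Showing: ${resource.data}"
    is Resource.Loading -> "Loading... (cached: ${resource.data})"
    is Resource.Error -> "Error: ${resource.message} (cached: ${resource.data})"
}
```

In a real fragment, the observer of a LiveData<Resource<User>> would make the same three-way decision: render the data, show a progress indicator (possibly over stale data), or display an error.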

Because it’s common to load data from the network while showing the disk copy of that data, it’s good to create a helper class that we can reuse in multiple places. For this example, we create a class called NetworkBoundResource.

The following diagram shows the decision tree for NetworkBoundResource:

It starts by observing the database for the resource. When the entry is loaded from the database for the first time, NetworkBoundResource checks whether the result is good enough to be dispatched or whether it should be re-fetched from the network. Note that both of these situations can happen at the same time, given that we probably want to show cached data while updating it from the network.

If the network call completes successfully, it saves the response into the database and re-initializes the stream. If the network request fails, the NetworkBoundResource dispatches a failure directly.

The following code snippet shows the public API provided by the NetworkBoundResource class for its subclasses:

NetworkBoundResource.kt

// ResultType: Type for the Resource data.
// RequestType: Type for the API response.
abstract class NetworkBoundResource<ResultType, RequestType> {
   // Called to save the result of the API response into the database
   @WorkerThread
   protected abstract fun saveCallResult(item: RequestType)

   // Called with the data in the database to decide whether to fetch
   // potentially updated data from the network.
   @MainThread
   protected abstract fun shouldFetch(data: ResultType?): Boolean

   // Called to get the cached data from the database.
   @MainThread
   protected abstract fun loadFromDb(): LiveData<ResultType>

   // Called to create the API call.
   @MainThread
   protected abstract fun createCall(): LiveData<ApiResponse<RequestType>>

   // Called when the fetch fails. The child class may want to reset components
   // like rate limiter.
   protected open fun onFetchFailed() {}

   // Returns a LiveData object that represents the resource that's implemented
   // in the base class.
   fun asLiveData(): LiveData<ResultType> = TODO()
}

Note these important details about the class’s definition:

  • It defines two type parameters, ResultType and RequestType, because the data type returned from the API might not match the data type used locally.
  • It uses a class called ApiResponse for network requests. ApiResponse is a simple wrapper around the Retrofit2.Call class that converts responses to instances of LiveData.
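To make the second point concrete, here is a minimal sketch of what such a wrapper might look like. The real ApiResponse used in the Android samples has more cases and parses Retrofit responses directly; the class and field names below are illustrative.

```kotlin
// Hypothetical minimal version of an ApiResponse wrapper: it turns a raw
// HTTP result (status code + body) into a success/error value that the
// repository layer can consume without knowing about Retrofit.
sealed class ApiResponse<out T> {
    data class ApiSuccessResponse<T>(val body: T) : ApiResponse<T>()
    data class ApiErrorResponse(val errorMessage: String) : ApiResponse<Nothing>()

    companion object {
        // Factory that maps an HTTP status code and body to a response type.
        fun <T> create(code: Int, body: T?, error: String?): ApiResponse<T> =
            if (code in 200..299 && body != null) {
                ApiSuccessResponse(body)
            } else {
                ApiErrorResponse(error ?: "unknown error")
            }
    }
}
```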

After creating NetworkBoundResource, we can use it to write the disk- and network-bound implementation of the User resource in the UserRepository class:

UserRepository

// Informs Dagger that this class should be constructed only once.
@Singleton
class UserRepository @Inject constructor(
   private val webservice: Webservice,
   private val userDao: UserDao,
   private val rateLimiter: RateLimiter // throttles how often we hit the network
) {
   fun getUser(userId: String): LiveData<User> {
       return object : NetworkBoundResource<User, User>() {
           override fun saveCallResult(item: User) {
               userDao.save(item)
           }

           override fun shouldFetch(data: User?): Boolean {
               return rateLimiter.canFetch(userId) && (data == null || !isFresh(data))
           }

           override fun loadFromDb(): LiveData<User> {
               return userDao.load(userId)
           }

           override fun createCall(): LiveData<ApiResponse<User>> {
               return webservice.getUser(userId)
           }
       }.asLiveData()
   }

   // Decides whether the cached User is recent enough to skip a refetch.
   private fun isFresh(user: User): Boolean = TODO()
}

That’s all for this article.


Conclusion

In this article, we learned about best practices and recommended architecture for building robust, production-quality apps in Android.

Thanks for reading! I hope you enjoyed and learned about the App Architecture concept in Android. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.

You can find other articles from CoolMonkTechie at the links below:

You can also follow the official Android website and tutorials at the links below:

If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING !!

A Short Note – How Does The MVI Work In Android ?

Hello Readers, CoolMonkTechie heartily welcomes you to A Short Note Series (How does the MVI work in Android?).

In this note series, we will understand what the MVI architectural pattern is and how it resolves some of the challenges we face in Android development.

Android architectural patterns are growing day by day. As we develop apps, we face new challenges and issues, and new patterns are discovered as we keep solving similar problems. As Android developers, we have MVC, MVP, and MVVM as the most commonly used patterns. All of them use an imperative programming approach. Even though this approach resolves most of our challenges, we still face difficulties around thread safety and maintaining the application's state.

So Let’s begin.


What is MVI architecture?

MVI stands for Model-View-Intent. This pattern was introduced to Android relatively recently. It works on the principle of a unidirectional and cyclical flow, inspired by the Cycle.js framework.

Let’s see what is the role of each component of MVI.

  • Model: Unlike in other patterns, the Model in MVI represents the state of the UI. For example, the UI might be in states such as data loading, data loaded, UI changed by a user action, error, or the user's current screen position. Each state is stored as an object in the model.
  • View: The View in MVI is an interface, implemented by activities and fragments. It acts as a container that accepts different model states and renders them as UI. Views use observable intents (note: these are not the traditional Android Intents) to respond to user actions.
  • Intent: Despite the name, this is not the traditional Android Intent. The result of a user action is passed as an input value to an intent; the intent in turn feeds the model, whose new state is loaded and displayed through the view.


How does the MVI work?

The user performs an action, which becomes an intent → the intent is an input to the model → the model stores the state and sends the requested state to the view → the view loads the state from the model → and displays it to the user.

Notice that the data always flows from the user and ends with the user, through intents. It cannot flow the other way, hence it is called a unidirectional architecture. If the user performs another action, the same cycle repeats, hence it is cyclic.
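The cycle above can be sketched as a plain-Kotlin reducer loop. This is a minimal illustration, not a framework API; names like CounterIntent and CounterState are made up for the example.

```kotlin
// A user action modeled as an intent.
sealed class CounterIntent {
    object Increment : CounterIntent()
    object Reset : CounterIntent()
}

// Immutable model: the complete state the view renders.
data class CounterState(val count: Int = 0)

// The model's reducer: intent + current state -> new state.
// Because CounterState is immutable, producing a fresh copy is thread-safe.
fun reduce(state: CounterState, intent: CounterIntent): CounterState =
    when (intent) {
        CounterIntent.Increment -> state.copy(count = state.count + 1)
        CounterIntent.Reset -> CounterState()
    }

// The "view" only ever renders the latest state.
fun render(state: CounterState): String = "count = ${state.count}"
```

Each user action produces a new state, and the view renders it, completing one turn of the cycle:

```kotlin
var state = CounterState()
state = reduce(state, CounterIntent.Increment)
state = reduce(state, CounterIntent.Increment)
render(state) // "count = 2"
```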


Advantages and Disadvantages of MVI

Let’s see what are the advantages and disadvantages of MVI.


Advantages of MVI

  • Maintaining state is no longer a challenge with this architecture, as it focuses mainly on states.
  • As it is unidirectional, data flow can be tracked and predicted easily.
  • It ensures thread safety because the state objects are immutable.
  • It is easy to debug, as we know the state the app was in when an error occurred.
  • It is more decoupled, as each component fulfills its own responsibility.
  • Testing the app is also easier, since we can map the business logic for each state.


Disadvantages of MVI

  • It leads to lots of boilerplate code, as we have to maintain a state for each user action.
  • It creates lots of objects for all the states, which can be costly in terms of app memory.
  • Handling alert states can be challenging across configuration changes. For example, if there is no internet we show a snackbar; on a configuration change, the snackbar is shown again because it is still part of the current state. In terms of usability, this has to be handled.
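One common workaround for the re-shown snackbar problem is to wrap one-shot messages in an "event" holder that can be consumed only once. The Event class below is a minimal sketch of that pattern, not part of any framework:

```kotlin
// Wraps a value that should be handled at most once (e.g. a snackbar
// message). After a configuration change the state still holds the event,
// but getContentIfNotHandled() returns null the second time, so the
// snackbar is not shown again.
class Event<out T>(private val content: T) {
    var hasBeenHandled = false
        private set

    // Returns the content only on first access.
    fun getContentIfNotHandled(): T? =
        if (hasBeenHandled) null
        else {
            hasBeenHandled = true
            content
        }

    // Always returns the content, e.g. for logging.
    fun peekContent(): T = content
}
```

The view shows the snackbar only when getContentIfNotHandled() returns a non-null value, so recreating the view after rotation does not repeat the alert.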


Conclusion

In this note series, we understood what the MVI architectural pattern is and how it resolves common challenges in Android.

Thanks for reading! I hope you enjoyed and learned about the MVI concept in Android. Reading is one thing, but the only way to master it is to do it yourself.

Please follow and subscribe to this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.

You can find other articles from CoolMonkTechie at the links below:

You can also follow the official Android website and tutorials at the links below:

If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.

Thanks again for reading. HAPPY READING !!
