Hello Readers, CoolMonkTechie heartily welcomes you to this article.
In this article, we will understand how the Virtual DOM works in ReactJS. We will cover the following topics to understand the Virtual DOM concepts in ReactJS:
What is the Real DOM ?
What are the issues with the Real DOM ?
What is the Virtual DOM ?
How does the Virtual DOM help solve these issues ?
React Elements Vs React Components in the Virtual DOM
Real DOM Vs Virtual DOM
A famous quote about learning is:
“An investment in knowledge pays the best interest.”
So let's begin.
Real DOM
DOM stands for Document Object Model and is an abstraction of a structured text. For web developers, this text is an HTML code, and the DOM is simply called HTML DOM. Elements of HTML become nodes in the DOM.
So, while HTML is a text, the DOM is an in-memory representation of this text.
“Compare it to a process being an instance of a program. You can have multiple processes of the same one program, just like you can have multiple DOMs of the same HTML (e.g. the same page loaded on many tabs). “
The HTML DOM provides an interface (API) to traverse and modify the nodes. It contains methods like getElementById or removeChild. We usually use JavaScript to work with the DOM, because… well, nobody knows why :).
So, whenever we want to dynamically change the content of the web page, we modify the DOM:
#!javascript
var item = document.getElementById("myLI");
item.parentNode.removeChild(item);
document is an abstraction of the root node, while getElementById, parentNode and removeChild are methods from HTML DOM API.
Issues
The HTML DOM is always tree-structured – which is allowed by the structure of HTML document. This is cool because we can traverse trees fairly easily. Unfortunately, easily doesn’t mean quickly here.
The DOM trees are huge nowadays. Since we are more and more pushed towards dynamic web apps (Single Page Applications – SPAs), we need to modify the DOM tree incessantly and a lot. And this is a real performance and development pain.
Consider a DOM made of thousands of divs. Remember, we are modern web developers, and our app is very interactive. We have lots of methods that handle events – clicks, submits, type-ins… A typical jQuery-like event handler looks like this:
find every node interested in an event
update it if necessary
Which has two problems:
It’s hard to manage. Imagine that you have to tweak an event handler. If you’ve lost the context, you have to dive really deep into the code to even know what’s going on. Both time-consuming and bug-risky.
It’s inefficient. Do we really need to do all this searching manually? Maybe we could be smarter and tell in advance which nodes are to be updated?
Once again, React comes with a helping hand. The solution to problem 1 is declarativeness. Instead of using low-level techniques like traversing the DOM tree manually, you simply declare how a component should look. React does the low-level job for you – the HTML DOM API methods are called under the hood. React doesn’t want you to worry about it – eventually, the component will look like it should.
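For instance, here is a minimal sketch of the declarative style (the ShoppingList component and its items prop are hypothetical, not taken from the snippet above): we describe what the list should look like for the current data, and React issues the necessary DOM calls for us.
#!javascript
// Declarative: we never call removeChild ourselves. When an item disappears
// from this.props.items, React removes the corresponding <li> under the hood.
var ShoppingList = React.createClass({
  render: function() {
    return (
      <ul>
        {this.props.items.map(function(item) {
          return <li key={item.id}>{item.label}</li>;
        })}
      </ul>
    );
  }
});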
But this doesn’t solve the performance issue. And this is exactly where the Virtual DOM comes into action.
Virtual DOM
In React, for every DOM object, there is a corresponding “virtual DOM object.” A virtual DOM object is a representation of a DOM object, like a lightweight copy.
A virtual DOM object has the same properties as a real DOM object, but it lacks the real thing’s power to directly change what’s on the screen.
A virtual DOM is a lightweight JavaScript object which originally is just the copy of the real DOM. It is a node tree that lists the elements, their attributes and content as Objects and their properties. React’s render function creates a node tree out of the React components. It then updates this tree in response to the mutations in the data model which is caused by various actions done by the user or by the system.
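As a rough sketch (an illustration only, not React's actual internal node format), a virtual DOM node for a small fragment of HTML can be pictured as a plain JavaScript object:
#!javascript
// Roughly what <ul class="list"><li>item</li></ul> looks like as a node tree
// of plain objects: element type, attributes, and children.
var vnode = {
  type: 'ul',
  props: { className: 'list' },
  children: [
    { type: 'li', props: {}, children: ['item'] }
  ]
};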
Manipulating the DOM is slow. Manipulating the virtual DOM is much faster, because nothing gets drawn onscreen. Think of manipulating the virtual DOM as editing a blueprint, as opposed to moving rooms in an actual house.
How it helps
When you render a JSX element, every single virtual DOM object gets updated.
This sounds incredibly inefficient, but the cost is insignificant because the virtual DOM can update so quickly.
Once the virtual DOM has updated, then React compares the virtual DOM with a virtual DOM snapshot that was taken right before the update.
By comparing the new virtual DOM with a pre-update version, React figures out exactly which virtual DOM objects have changed. This process is called “diffing.”
Once React knows which virtual DOM objects have changed, then React updates those objects, and only those objects, on the real DOM. In a list example like the one sketched earlier, React would be smart enough to rebuild only the one list item that changed, and leave the rest of the list alone.
This makes a big difference! React can update only the necessary parts of the DOM. React’s reputation for performance comes largely from this innovation.
In summary, here’s what happens when you try to update the DOM in React:
The entire virtual DOM gets updated.
The virtual DOM gets compared to what it looked like before you updated it. React figures out which objects have changed.
The changed objects, and the changed objects only, get updated on the real DOM.
Changes on the real DOM cause the screen to change.
ReactElement Vs ReactComponent
When we are talking about the virtual DOM, it’s important to see the difference between these two.
ReactElement
This is the primary type in React. React docs say:
“A ReactElement is a light, stateless, immutable, virtual representation of a DOM Element.”
ReactElements live in the virtual DOM. They are its basic nodes. Their immutability makes them easy and fast to compare and update. This is one reason for React’s great performance.
What can be a ReactElement? Almost every HTML tag – div, table, strong.
Once defined, ReactElements can be rendered into the “real” DOM. This is the moment when React ceases to control the elements. They become slow, boring DOM nodes:
#!javascript
var root = React.createElement('div');
ReactDOM.render(root, document.getElementById('example'));
// If you are surprised by the fact that `render`
// comes from `ReactDOM` package, see the Post Script.
JSX compiles HTML tags to ReactElements. So this is equivalent to the above:
#!javascript
var root = <div />;
ReactDOM.render(root, document.getElementById('example'));
Once again – ReactElements are the basic items in the React virtual DOM. However, they are stateless and therefore don’t seem very helpful for us, the programmers. We would rather work with class-like pieces of HTML, with kind-of-variables and kind-of-constants – wouldn’t we? And here we come to…
ReactComponent
What distinguishes a ReactComponent from a ReactElement is that ReactComponents are stateful.
We usually use React.createClass method to define one:
#!javascript
var CommentBox = React.createClass({
  render: function() {
    return (
      <div className="commentBox">
        Hello, world! I am a CommentBox.
      </div>
    );
  }
});
The HTML-like blocks returned from the render method can have state. And the best thing is, whenever the state changes, the component is re-rendered.
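For example, here is a minimal sketch (the LikeButton component is hypothetical) showing that a setState call triggers a re-render:
#!javascript
// Clicking the button updates the state; React re-runs render and patches
// only the parts of the DOM that actually changed.
var LikeButton = React.createClass({
  getInitialState: function() {
    return { liked: false };
  },
  handleClick: function() {
    this.setState({ liked: !this.state.liked });
  },
  render: function() {
    return (
      <button onClick={this.handleClick}>
        {this.state.liked ? 'Liked!' : 'Like'}
      </button>
    );
  }
});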
ReactComponents turned out to be a great tool for designing dynamic HTML. They don’t have access to the virtual DOM, but they can be easily converted to ReactElements:
#!javascript
var element = React.createElement(MyComponent);
// or equivalently, with JSX
var element = <MyComponent />;
What makes the difference?
ReactComponents are great. They are easy to manage. But they have no access to the virtual DOM – and we would like to do as much as possible there.
Whenever a ReactComponent changes its state, we want to make as few changes to the “real” DOM as possible. So this is how React deals with it. The ReactComponent is converted to a ReactElement. Now the ReactElement can be inserted into the virtual DOM, and compared and updated quickly and easily. How exactly – well, that’s the job of the diff algorithm. The point is – it’s done faster than it would be in the “regular” DOM.
When React knows the diff, it’s converted to low-level (HTML DOM API) calls, which are executed on the real DOM. This code is optimised per browser.
Real DOM Vs Virtual DOM
Now we can summarize the differences between the Real DOM and the Virtual DOM:
The Real DOM is a language-neutral interface that allows programs and scripts to dynamically access and update the content, structure, and style of a document, while the Virtual DOM is a collection of modules designed to provide a declarative way to represent the DOM for an application.
The Real DOM represents the document as nodes and objects, while a Virtual DOM object is a lightweight in-memory representation of a DOM object.
The Real DOM is an object-oriented representation of a web page, modified with a scripting language like JavaScript, while the Virtual DOM is manipulated entirely in JavaScript, which makes it well suited to dynamic, mobile-first applications.
Real DOM updates are slow, while Virtual DOM updates are fast.
The Real DOM directly updates what the browser renders, while the Virtual DOM can’t directly update the screen; its changes must eventually be flushed to the Real DOM.
Updating the Real DOM re-creates and re-renders the affected DOM nodes, while updating the Virtual DOM only changes lightweight JavaScript objects, which are then diffed against the previous tree.
DOM manipulation is very expensive with the Real DOM, while manipulation of the Virtual DOM is cheap.
Frequent Real DOM updates waste memory through repeated re-rendering, while the Virtual DOM keeps this overhead to a minimum by batching and diffing changes.
That’s all for this article.
Conclusion
In this article, we understood how the Virtual DOM works in ReactJS.
Thanks for reading! I hope you enjoyed and learned about the Virtual DOM concepts in ReactJS. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you to the Technology Best Tips and Tricks series.
In this series article, we will discuss the nine best practices for code review. Code reviews are important. They improve code quality. They make your codebase more stable. And they help programmers build relationships and work together more effectively.
But reviewing a peer’s code is easier said than done. And running a review process can be a nightmare for team leads. For that reason, we explain what to look for in a code review, how the code review process works, and the nine best practices for code review.
A famous quote about programming is:
“Experience is the name everyone gives to their mistakes.”
Here are the nine best practices for code review:
Know What to Look for in a Code Review
Build and Test — Before Review
Don’t Review Code for Longer Than 60 Minutes
Check No More Than 400 Lines at a Time
Give Feedback That Helps (Not Hurts)
Communicate Goals and Expectations
Include Everyone in the Code Review Process
Foster a Positive Culture
Automate to Save Time
1. Know What to Look for in a Code Review
It’s important to go into reviews knowing what to look for. Look for key things, such as…
Structure
Style
Logic
Performance
Test coverage
Design
Readability (and maintainability)
Functionality
We can do automated checks (e.g., static analysis) for some of the things — e.g., structure and logic. But others — e.g., design and functionality — require a human reviewer to evaluate.
Reviewing code with certain questions in mind can help us focus on the right things. For instance, we might evaluate code to answer:
Do we understand what the code does?
Does the code function as we expect it to?
Does this code fulfill regulatory requirements?
“By evaluating code critically — with questions in mind — we’ll make sure we check for the right things. And we’ll reduce time when it comes to testing.”
2. Build and Test — Before Code Review
In today’s era of Continuous Integration (CI), it’s key to build and test before doing a manual review. Ideally, after the tests have passed, we’ll conduct a review and then deploy the change to the dev code line.
This ensures stability. And doing automated checks first will cut down on errors and save time in the review process.
“Automation keeps you from wasting time in reviews.”
3. Don’t Review Code For Longer Than 60 Minutes
Never review for longer than 60 minutes at a time. Performance and attention-to-detail tend to drop off after that point. It’s best to conduct reviews often (and in short sessions).
Taking a break will give our brain a chance to reset. So, we can review it again with fresh eyes.
“Giving ourselves time to do short, frequent reviews will help us improve the quality of the codebase.”
4. Check No More Than 400 Lines at a Time
If we try to review too many lines of code at once, we’re less likely to find defects. Try to keep each review session to 400 lines or less. Setting a line-of-code (LOC) limit is important for the same reasons as setting a time limit. It ensures we are at our best when reviewing the code.
“Focusing on fewer than 400 lines makes our reviews more effective. And it helps us ensure higher quality in the codebase.”
5. Give Feedback That Helps (Not Hurts)
Try to be constructive in our feedback, rather than critical. We can do this by asking questions, rather than making statements. And remember to give praise alongside our constructive feedback.
Giving feedback in-person (or even doing your review in-person) will help us communicate with the right tone.
“Our code will always need to be reviewed. And we’ll always need to review our coworkers’ code. When we approach reviews as a learning process, everyone wins.”
6. Communicate Goals and Expectations
We should be clear on what the goals of the review are, as well as the expectations of reviewers. Giving our reviewers a checklist will ensure that the reviews are consistent. Programmers will evaluate each other’s code with the same criteria in mind.
“By communicating goals and expectations, everyone saves time. Reviewers will know what to look for — and they’ll be able to use their time wisely in the review process.”
7. Include Everyone in the Code Review Process
No matter how senior the programmer is, everyone needs to review and be reviewed. After all, everyone performs better when they know someone else will be looking at their work.
When we’re running reviews, it’s best to include both another engineer and the software architect. They’ll spot different issues in the code, in relation to both the broader codebase and the overall design of the product.
“Including everyone in the review process improves collaboration and relationships between programmers.”
8. Foster a Positive Culture
Fostering a positive culture around reviews is important, as they play a vital role in product quality. It doesn’t matter who introduced the error. What matters is the bug was caught before it went into the product. And that should be celebrated.
“By fostering a positive culture, you’ll help your team appreciate (rather than dread) reviews.”
9. Automate to Save Time
There are some things that reviewers will need to check in manual reviews. But there are some things that can be checked automatically using the right tools.
Static code analyzers, for instance, find potential issues in code by checking it against coding rules. Running static analyzers over the code minimizes the number of issues that reach the peer review phase. Using tools for lightweight reviews can help, too.
“By using automated tools, you can save time in the peer review process. This frees up reviewers to focus on the issues that tools can’t find — like usability.”
That’s all for this article.
Conclusion
In this article, we understood the nine best practices for code review.
Thanks for reading! I hope you enjoyed and learned about the nine best practices for code review. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you to this article.
In this article, we will learn how Automatic Reference Counting (ARC) works in Swift. We will discuss the strong reference cycle problem, and its solutions, both between class instances and for closures in Swift. Memory management is a core concept in any programming language. For more information about memory management concepts, you can explore the article below:
iOS – Why Is Advanced IOS Memory Management Valuable In Swift ?
A famous quote about learning is:
“Education is not the filling of a pot but the lighting of a fire.”
So let's begin.
Swift uses Automatic Reference Counting (ARC) to track and manage your app’s memory usage. In most cases, this means that memory management “just works” in Swift, and we do not need to think about memory management ourself. ARC automatically frees up the memory used by class instances when those instances are no longer needed.
However, in a few cases ARC requires more information about the relationships between parts of our code in order to manage memory for us. This article describes those situations and shows how we enable ARC to manage all of our app’s memory.
Reference counting applies only to instances of classes. Structures and enumerations are value types, not reference types, and are not stored and passed by reference.
How Does ARC Work ?
Every time we create a new instance of a class, ARC allocates a chunk of memory to store information about that instance. This memory holds information about the type of the instance, together with the values of any stored properties associated with that instance.
Additionally, when an instance is no longer needed, ARC frees up the memory used by that instance so that the memory can be used for other purposes instead. This ensures that class instances do not take up space in memory when they are no longer needed.
However, if ARC were to deallocate an instance that was still in use, it would no longer be possible to access that instance’s properties, or call that instance’s methods. Indeed, if we tried to access the instance, our app would most likely crash.
To make sure that instances don’t disappear while they are still needed, ARC tracks how many properties, constants, and variables are currently referring to each class instance. ARC will not deallocate an instance as long as at least one active reference to that instance still exists.
To make this possible, whenever we assign a class instance to a property, constant, or variable, that property, constant, or variable makes a strong reference to the instance. The reference is called a “strong” reference because it keeps a firm hold on that instance, and does not allow it to be deallocated for as long as that strong reference remains.
Example :
Here’s an example of how Automatic Reference Counting works. This example starts with a simple class called Person, which defines a stored constant property called name:
class Person {
    let name: String
    init(name: String) {
        self.name = name
        print("\(name) is being initialized")
    }
    deinit {
        print("\(name) is being deinitialized")
    }
}
The Person class has an initializer that sets the instance’s name property and prints a message to indicate that initialization is underway. The Person class also has a deinitializer that prints a message when an instance of the class is deallocated.
The next code snippet defines three variables of type Person?, which are used to set up multiple references to a new Person instance in subsequent code snippets. Because these variables are of an optional type (Person?, not Person), they are automatically initialized with a value of nil, and do not currently reference a Person instance.
var reference1: Person?
var reference2: Person?
var reference3: Person?
We can now create a new Person instance and assign it to one of these three variables:
reference1 = Person(name: "John Appleseed")
// Prints "John Appleseed is being initialized"
Note that the message "John Appleseed is being initialized" is printed at the point that we call the Person class’s initializer. This confirms that initialization has taken place.
Because the new Person instance has been assigned to the reference1 variable, there’s now a strong reference from reference1 to the new Person instance. Because there’s at least one strong reference, ARC makes sure that this Person is kept in memory and is not deallocated.
If we assign the same Person instance to two more variables, two more strong references to that instance are established:
reference2 = reference1
reference3 = reference1
There are now three strong references to this single Person instance.
If we break two of these strong references (including the original reference) by assigning nil to two of the variables, a single strong reference remains, and the Person instance is not deallocated:
reference1 = nil
reference2 = nil
ARC does not deallocate the Person instance until the third and final strong reference is broken, at which point it’s clear that you are no longer using the Person instance:
reference3 = nil
// Prints "John Appleseed is being deinitialized"
How To Cause Strong Reference Cycles Between Class Instances ?
In the examples above, ARC is able to track the number of references to the new Person instance we create and to deallocate that Person instance when it’s no longer needed.
However, it’s possible to write code in which an instance of a class never gets to a point where it has zero strong references. This can happen if two class instances hold a strong reference to each other, such that each instance keeps the other alive. This is known as a Strong Reference Cycle.
We resolve strong reference cycles by defining some of the relationships between classes as weak or unowned references instead of as strong references. However, before we learn how to resolve a strong reference cycle, it’s useful to understand how such a cycle is caused.
Here’s an example of how a strong reference cycle can be created by accident. This example defines two classes called Person and Apartment, which model a block of apartments and its residents:
class Person {
    let name: String
    init(name: String) { self.name = name }
    var apartment: Apartment?
    deinit { print("\(name) is being deinitialized") }
}

class Apartment {
    let unit: String
    init(unit: String) { self.unit = unit }
    var tenant: Person?
    deinit { print("Apartment \(unit) is being deinitialized") }
}
Every Person instance has a name property of type String and an optional apartment property that is initially nil. The apartment property is optional, because a person may not always have an apartment.
Similarly, every Apartment instance has a unit property of type String and has an optional tenant property that is initially nil. The tenant property is optional because an apartment may not always have a tenant.
Both of these classes also define a deinitializer, which prints the fact that an instance of that class is being deinitialized. This enables us to see whether instances of Person and Apartment are being deallocated as expected.
This next code snippet defines two variables of optional type called john and unit4A, which will be set to a specific Apartment and Person instance below. Both of these variables have an initial value of nil, by virtue of being optional:
var john: Person?
var unit4A: Apartment?
We can now create a specific Person instance and Apartment instance and assign these new instances to the john and unit4A variables:
john = Person(name: "John Appleseed")
unit4A = Apartment(unit: "4A")
Here’s how the strong references look after creating and assigning these two instances. The john variable now has a strong reference to the new Person instance, and the unit4A variable has a strong reference to the new Apartment instance:
We can now link the two instances together so that the person has an apartment, and the apartment has a tenant. Note that an exclamation point (!) is used to unwrap and access the instances stored inside the john and unit4A optional variables, so that the properties of those instances can be set:
john!.apartment = unit4A
unit4A!.tenant = john
Here’s how the strong references look after we link the two instances together:
Unfortunately, linking these two instances creates a strong reference cycle between them. The Person instance now has a strong reference to the Apartment instance, and the Apartment instance has a strong reference to the Person instance. Therefore, when you break the strong references held by the john and unit4A variables, the reference counts do not drop to zero, and the instances are not deallocated by ARC:
john = nil
unit4A = nil
Note that neither deinitializer was called when we set these two variables to nil. The strong reference cycle prevents the Person and Apartment instances from ever being deallocated, causing a memory leak in your app.
Here’s how the strong references look after we set the john and unit4A variables to nil:
The strong references between the Person instance and the Apartment instance remain and cannot be broken.
How To Solve Strong Reference Cycles Between Class Instances ?
Swift provides two ways to resolve strong reference cycles when we work with properties of class type: weak references and unowned references.
Weak and unowned references enable one instance in a reference cycle to refer to the other instance without keeping a strong hold on it. The instances can then refer to each other without creating a strong reference cycle.
Use a weak reference when the other instance has a shorter lifetime—that is, when the other instance can be deallocated first. In the Apartment example above, it’s appropriate for an apartment to be able to have no tenant at some point in its lifetime, and so a weak reference is an appropriate way to break the reference cycle in this case. In contrast, use an unowned reference when the other instance has the same lifetime or a longer lifetime.
Weak References
A weak reference is a reference that does not keep a strong hold on the instance it refers to, and so does not stop ARC from disposing of the referenced instance. This behavior prevents the reference from becoming part of a strong reference cycle. We indicate a weak reference by placing the weak keyword before a property or variable declaration.
Because a weak reference does not keep a strong hold on the instance it refers to, it’s possible for that instance to be deallocated while the weak reference is still referring to it. Therefore, ARC automatically sets a weak reference to nil when the instance that it refers to is deallocated. And, because weak references need to allow their value to be changed to nil at runtime, they are always declared as variables, rather than constants, of an optional type.
We can check for the existence of a value in the weak reference, just like any other optional value, and you will never end up with a reference to an invalid instance that no longer exists.
“Property observers aren’t called when ARC sets a weak reference to nil.”
The example below is identical to the Person and Apartment example from above, with one important difference. This time around, the Apartment type’s tenant property is declared as a weak reference:
class Person {
    let name: String
    init(name: String) { self.name = name }
    var apartment: Apartment?
    deinit { print("\(name) is being deinitialized") }
}

class Apartment {
    let unit: String
    init(unit: String) { self.unit = unit }
    weak var tenant: Person?
    deinit { print("Apartment \(unit) is being deinitialized") }
}
The strong references from the two variables (john and unit4A) and the links between the two instances are created as before:
var john: Person?
var unit4A: Apartment?
john = Person(name: "John Appleseed")
unit4A = Apartment(unit: "4A")
john!.apartment = unit4A
unit4A!.tenant = john
Here’s how the references look now that you’ve linked the two instances together:
The Person instance still has a strong reference to the Apartment instance, but the Apartment instance now has a weak reference to the Person instance. This means that when we break the strong reference held by the john variable by setting it to nil, there are no more strong references to the Person instance:
john = nil
// Prints "John Appleseed is being deinitialized"
Because there are no more strong references to the Person instance, it’s deallocated and the tenant property is set to nil:
The only remaining strong reference to the Apartment instance is from the unit4A variable. If we break that strong reference, there are no more strong references to the Apartment instance:
unit4A = nil
// Prints "Apartment 4A is being deinitialized"
Because there are no more strong references to the Apartment instance, it too is deallocated:
“In systems that use garbage collection, weak pointers are sometimes used to implement a simple caching mechanism because objects with no strong references are deallocated only when memory pressure triggers garbage collection. However, with ARC, values are deallocated as soon as their last strong reference is removed, making weak references unsuitable for such a purpose.”
Unowned References
Like a weak reference, an unowned reference does not keep a strong hold on the instance it refers to. Unlike a weak reference, however, an unowned reference is used when the other instance has the same lifetime or a longer lifetime. We indicate an unowned reference by placing the unowned keyword before a property or variable declaration.
Unlike a weak reference, an unowned reference is expected to always have a value. As a result, marking a value as unowned doesn’t make it optional, and ARC never sets an unowned reference’s value to nil.
“Use an unowned reference only when you are sure that the reference always refers to an instance that has not been deallocated. If we try to access the value of an unowned reference after that instance has been deallocated, we’ll get a runtime error.”
The following example defines two classes, Customer and CreditCard, which model a bank customer and a possible credit card for that customer. These two classes each store an instance of the other class as a property. This relationship has the potential to create a strong reference cycle.
The relationship between Customer and CreditCard is slightly different from the relationship between Apartment and Person seen in the weak reference example above. In this data model, a customer may or may not have a credit card, but a credit card will always be associated with a customer. A CreditCard instance never outlives the Customer that it refers to. To represent this, the Customer class has an optional card property, but the CreditCard class has an unowned (and non-optional) customer property.
Furthermore, a new CreditCard instance can only be created by passing a number value and a customer instance to a custom CreditCard initializer. This ensures that a CreditCard instance always has a customer instance associated with it when the CreditCard instance is created.
Because a credit card will always have a customer, we define its customer property as an unowned reference, to avoid a strong reference cycle:
class Customer {
    let name: String
    var card: CreditCard?
    init(name: String) {
        self.name = name
    }
    deinit { print("\(name) is being deinitialized") }
}

class CreditCard {
    let number: UInt64
    unowned let customer: Customer
    init(number: UInt64, customer: Customer) {
        self.number = number
        self.customer = customer
    }
    deinit { print("Card #\(number) is being deinitialized") }
}
“The number property of the CreditCard class is defined with a type of UInt64 rather than Int, to ensure that the number property’s capacity is large enough to store a 16-digit card number on both 32-bit and 64-bit systems.”
This next code snippet defines an optional Customer variable called john, which will be used to store a reference to a specific customer. This variable has an initial value of nil, by virtue of being optional:
var john: Customer?
We can now create a Customer instance, and use it to initialize and assign a new CreditCard instance as that customer’s card property:
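john = Customer(name: "John Appleseed")
john!.card = CreditCard(number: 1234_5678_9012_3456, customer: john!)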
Here’s how the references look, now that we’ve linked the two instances:
The Customer instance now has a strong reference to the CreditCard instance, and the CreditCard instance has an unowned reference to the Customer instance.
Because of the unowned customer reference, when we break the strong reference held by the john variable, there are no more strong references to the Customer instance:
Because there are no more strong references to the Customer instance, it’s deallocated. After this happens, there are no more strong references to the CreditCard instance, and it too is deallocated:
john = nil
// Prints "John Appleseed is being deinitialized"
// Prints "Card #1234567890123456 is being deinitialized"
The final code snippet above shows that the deinitializers for the Customer instance and CreditCard instance both print their “deinitialized” messages after the john variable is set to nil.
“The examples above show how to use safe unowned references. Swift also provides unsafe unowned references for cases where you need to disable runtime safety checks—for example, for performance reasons. As with all unsafe operations, we take on the responsibility for checking that code for safety. We indicate an unsafe unowned reference by writing unowned(unsafe). If we try to access an unsafe unowned reference after the instance that it refers to is deallocated, our program will try to access the memory location where the instance used to be, which is an unsafe operation.”
Unowned Optional References
We can mark an optional reference to a class as unowned. In terms of the ARC ownership model, an unowned optional reference and a weak reference can both be used in the same contexts. The difference is that when we use an unowned optional reference, we’re responsible for making sure it always refers to a valid object or is set to nil.
Here’s an example that keeps track of the courses offered by a particular department at a school:
class Department {
    var name: String
    var courses: [Course]
    init(name: String) {
        self.name = name
        self.courses = []
    }
}

class Course {
    var name: String
    unowned var department: Department
    unowned var nextCourse: Course?
    init(name: String, in department: Department) {
        self.name = name
        self.department = department
        self.nextCourse = nil
    }
}
Department maintains a strong reference to each course that the department offers. In the ARC ownership model, a department owns its courses. Course has two unowned references, one to the department and one to the next course a student should take; a course doesn’t own either of these objects. Every course is part of some department so the department property isn’t an optional. However, because some courses don’t have a recommended follow-on course, the nextCourse property is an optional. Here’s an example of using these classes:
let department = Department(name: "Horticulture")
let intro = Course(name: "Survey of Plants", in: department)
let intermediate = Course(name: "Growing Common Herbs", in: department)
let advanced = Course(name: "Caring for Tropical Plants", in: department)
intro.nextCourse = intermediate
intermediate.nextCourse = advanced
department.courses = [intro, intermediate, advanced]
The code above creates a department and its three courses. The intro and intermediate courses both have a suggested next course stored in their nextCourse property, which maintains an unowned optional reference to the course a student should take after completing this one.
An unowned optional reference doesn’t keep a strong hold on the instance of the class that it wraps, and so it doesn’t prevent ARC from deallocating the instance. It behaves the same as an unowned reference does under ARC, except that an unowned optional reference can be nil.
Like non-optional unowned references, we’re responsible for ensuring that nextCourse always refers to a course that hasn’t been deallocated. In this case, for example, when we delete a course from department.courses we also need to remove any references to it that other courses might have.
“The underlying type of an optional value is Optional, which is an enumeration in the Swift standard library. However, optionals are an exception to the rule that value types can’t be marked with unowned. The optional that wraps the class doesn’t use reference counting, so we don’t need to maintain a strong reference to the optional.”
Unowned References and Implicitly Unwrapped Optional Properties
The examples for weak and unowned references above cover two of the more common scenarios in which it’s necessary to break a strong reference cycle.
However, there’s a scenario in which both properties should always have a value, and neither property should ever be nil once initialization is complete. In this scenario, it’s useful to combine an unowned property on one class with an implicitly unwrapped optional property on the other class.
This enables both properties to be accessed directly (without optional unwrapping) once initialization is complete, while still avoiding a reference cycle. This section shows you how to set up such a relationship.
The example below defines two classes, Country and City, each of which stores an instance of the other class as a property. In this data model, every country must always have a capital city, and every city must always belong to a country. To represent this, the Country class has a capitalCity property, and the City class has a country property:
class Country {
    let name: String
    var capitalCity: City!
    init(name: String, capitalName: String) {
        self.name = name
        self.capitalCity = City(name: capitalName, country: self)
    }
}

class City {
    let name: String
    unowned let country: Country
    init(name: String, country: Country) {
        self.name = name
        self.country = country
    }
}
To set up the interdependency between the two classes, the initializer for City takes a Country instance, and stores this instance in its country property.
The initializer for City is called from within the initializer for Country. However, the initializer for Country cannot pass self to the City initializer until the new Country instance is fully initialized.
To cope with this requirement, we declare the capitalCity property of Country as an implicitly unwrapped optional property, indicated by the exclamation point at the end of its type annotation (City!). This means that the capitalCity property has a default value of nil, like any other optional, but can be accessed without the need to unwrap its value.
Because capitalCity has a default nil value, a new Country instance is considered fully initialized as soon as the Country instance sets its name property within its initializer. This means that the Country initializer can start to reference and pass around the implicit self property as soon as the name property is set. The Country initializer can therefore pass self as one of the parameters for the City initializer when the Country initializer is setting its own capitalCity property.
All of this means that we can create the Country and City instances in a single statement, without creating a strong reference cycle, and the capitalCity property can be accessed directly, without needing to use an exclamation point to unwrap its optional value:
var country = Country(name: "Canada", capitalName: "Ottawa")
print("\(country.name)'s capital city is called \(country.capitalCity.name)")
// Prints "Canada's capital city is called Ottawa"
In the example above, the use of an implicitly unwrapped optional means that all of the two-phase class initializer requirements are satisfied. The capitalCity property can be used and accessed like a non-optional value once initialization is complete, while still avoiding a strong reference cycle.
How To Cause Strong Reference Cycles for Closures ?
A strong reference cycle can also occur if we assign a closure to a property of a class instance, and the body of that closure captures the instance. This capture might occur because the closure’s body accesses a property of the instance, such as self.someProperty, or because the closure calls a method on the instance, such as self.someMethod(). In either case, these accesses cause the closure to “capture” self, creating a strong reference cycle.
This strong reference cycle occurs because closures, like classes, are reference types. When we assign a closure to a property, we are assigning a reference to that closure. In essence, it’s the same problem as above—two strong references are keeping each other alive. However, rather than two class instances, this time it’s a class instance and a closure that are keeping each other alive.
Swift provides an elegant solution to this problem, known as a closure capture list. However, before we learn how to break a strong reference cycle with a closure capture list, it’s useful to understand how such a cycle can be caused.
The example below shows how we can create a strong reference cycle when using a closure that references self. This example defines a class called HTMLElement, which provides a simple model for an individual element within an HTML document:
class HTMLElement {
    let name: String
    let text: String?

    lazy var asHTML: () -> String = {
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }

    init(name: String, text: String? = nil) {
        self.name = name
        self.text = text
    }

    deinit {
        print("\(name) is being deinitialized")
    }
}
The HTMLElement class defines a name property, which indicates the name of the element, such as "h1" for a heading element, "p" for a paragraph element, or "br" for a line break element. HTMLElement also defines an optional text property, which we can set to a string that represents the text to be rendered within that HTML element.
In addition to these two simple properties, the HTMLElement class defines a lazy property called asHTML. This property references a closure that combines name and text into an HTML string fragment. The asHTML property is of type () -> String, or “a function that takes no parameters, and returns a String value”.
By default, the asHTML property is assigned a closure that returns a string representation of an HTML tag. This tag contains the optional text value if it exists, or no text content if text does not exist. For a paragraph element, the closure would return "<p>some text</p>" or "<p />", depending on whether the text property equals "some text" or nil.
The asHTML property is named and used somewhat like an instance method. However, because asHTML is a closure property rather than an instance method, we can replace the default value of the asHTML property with a custom closure, if we want to change the HTML rendering for a particular HTML element.
For example, the asHTML property could be set to a closure that defaults to some text if the text property is nil, in order to prevent the representation from returning an empty HTML tag:
let heading = HTMLElement(name: "h1")
let defaultText = "some default text"
heading.asHTML = {
    return "<\(heading.name)>\(heading.text ?? defaultText)</\(heading.name)>"
}
print(heading.asHTML())
// Prints "<h1>some default text</h1>"
“The asHTML property is declared as a lazy property, because it’s only needed if and when the element actually needs to be rendered as a string value for some HTML output target. The fact that asHTML is a lazy property means that you can refer to self within the default closure, because the lazy property will not be accessed until after initialization has been completed and self is known to exist.”
The HTMLElement class provides a single initializer, which takes a name argument and (if desired) a text argument to initialize a new element. The class also defines a deinitializer, which prints a message to show when an HTMLElement instance is deallocated.
Here’s how you use the HTMLElement class to create and print a new instance:
var paragraph: HTMLElement? = HTMLElement(name: "p", text: "hello, world")
print(paragraph!.asHTML())
// Prints "<p>hello, world</p>"
“The paragraph variable above is defined as an optional HTMLElement, so that it can be set to nil below to demonstrate the presence of a strong reference cycle.”
Unfortunately, the HTMLElement class, as written above, creates a strong reference cycle between an HTMLElement instance and the closure used for its default asHTML value. Here’s how the cycle looks:
The instance’s asHTML property holds a strong reference to its closure. However, because the closure refers to self within its body (as a way to reference self.name and self.text), the closure captures self, which means that it holds a strong reference back to the HTMLElement instance. A strong reference cycle is created between the two.
“Even though the closure refers to self multiple times, it only captures one strong reference to the HTMLElement instance.”
If we set the paragraph variable to nil and break its strong reference to the HTMLElement instance, neither the HTMLElement instance nor its closure are deallocated, because of the strong reference cycle:
paragraph = nil
Note that the message in the HTMLElement deinitializer is not printed, which shows that the HTMLElement instance is not deallocated.
How To Resolve Strong Reference Cycles for Closures ?
We resolve a strong reference cycle between a closure and a class instance by defining a capture list as part of the closure’s definition. A capture list defines the rules to use when capturing one or more reference types within the closure’s body. As with strong reference cycles between two class instances, we declare each captured reference to be a weak or unowned reference rather than a strong reference. The appropriate choice of weak or unowned depends on the relationships between the different parts of our code.
“Swift requires us to write self.someProperty or self.someMethod() (rather than just someProperty or someMethod()) whenever we refer to a member of self within a closure. This helps us remember that it’s possible to capture self by accident.”
Defining a Capture List
Each item in a capture list is a pairing of the weak or unowned keyword with a reference to a class instance (such as self) or a variable initialized with some value (such as delegate = self.delegate). These pairings are written within a pair of square braces, separated by commas.
Place the capture list before a closure’s parameter list and return type if they are provided:
lazy var someClosure: (Int, String) -> String = {
    [unowned self, weak delegate = self.delegate]
    (index: Int, stringToProcess: String) -> String in
    // closure body goes here
}
If a closure does not specify a parameter list or return type because they can be inferred from context, place the capture list at the very start of the closure, followed by the in keyword:
lazy var someClosure: () -> String = {
    [unowned self, weak delegate = self.delegate] in
    // closure body goes here
}
Weak and Unowned References
Define a capture in a closure as an unowned reference when the closure and the instance it captures will always refer to each other, and will always be deallocated at the same time.
Conversely, define a capture as a weak reference when the captured reference may become nil at some point in the future. Weak references are always of an optional type, and automatically become nil when the instance they reference is deallocated. This enables you to check for their existence within the closure’s body.
“If the captured reference will never become nil, it should always be captured as an unowned reference, rather than a weak reference.”
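As a quick illustration of the weak case, here is a minimal sketch (the PhotoViewer class and its onDataLoaded property are hypothetical, not part of the HTMLElement example below): self is captured weakly because the instance may already have been deallocated by the time the stored closure runs.
import Foundation

class PhotoViewer {
    var imageData: Data?
    var onDataLoaded: ((Data) -> Void)?

    func configure() {
        // self is captured weakly: if the viewer is deallocated before the
        // closure runs, self is nil and the closure simply does nothing.
        onDataLoaded = { [weak self] data in
            guard let self = self else { return }
            self.imageData = data
        }
    }
}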
An unowned reference is the appropriate capture method to use to resolve the strong reference cycle in the HTMLElement example from Strong Reference Cycles for Closures above. Here’s how we write the HTMLElement class to avoid the cycle:
class HTMLElement {
    let name: String
    let text: String?

    lazy var asHTML: () -> String = {
        [unowned self] in
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }

    init(name: String, text: String? = nil) {
        self.name = name
        self.text = text
    }

    deinit {
        print("\(name) is being deinitialized")
    }
}
This implementation of HTMLElement is identical to the previous implementation, apart from the addition of a capture list within the asHTML closure. In this case, the capture list is [unowned self], which means “capture self as an unowned reference rather than a strong reference”.
We can create and print an HTMLElement instance as before:
var paragraph: HTMLElement? = HTMLElement(name: "p", text: "hello, world")
print(paragraph!.asHTML())
// Prints "<p>hello, world</p>"
Here’s how the references look with the capture list in place:
This time, the capture of self by the closure is an unowned reference, and does not keep a strong hold on the HTMLElement instance it has captured. If you set the strong reference from the paragraph variable to nil, the HTMLElement instance is deallocated, as can be seen from the printing of its deinitializer message in the example below:
paragraph = nil
// Prints "p is being deinitialized"
That’s all for this article.
Conclusion
In this article, we understood how Automatic Reference Counting (ARC) works in Swift. We also discussed the strong reference cycle problem, and its solutions, both between class instances and for closures in Swift. We saw how a strong reference cycle can be created when two class instance properties hold a strong reference to each other, and how to use weak and unowned references to break these strong reference cycles.
We looked at strong reference cycles in the different scenarios below:
The Person and Apartment example shows a situation where two properties, both of which are allowed to be nil, have the potential to cause a strong reference cycle. This scenario is best resolved with a weak reference.
The Customer and CreditCard example shows a situation where one property that is allowed to be nil and another property that cannot be nil have the potential to cause a strong reference cycle. This scenario is best resolved with an unowned reference.
In a third scenario, both properties should always have a value, and neither property should ever be nil once initialization is complete. In this scenario, it’s useful to combine an unowned property on one class with an implicitly unwrapped optional property on the other class.
A strong reference cycle can also occur if you assign a closure to a property of a class instance, and the body of that closure captures the instance. Swift provides an elegant solution to this problem, known as a closure capture list.
Thanks for reading! I hope you enjoyed and learned about Automatic Reference Counting (ARC) and the strong reference cycle problem and its solutions in different scenarios in Swift. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you to this article.
In this article, we will learn what lies beyond the basics of iOS memory management, reference counting and the object life cycle. Memory management is a core concept in any programming language. Memory management in iOS was initially non-ARC (manual reference counting), where we had to retain and release objects ourselves. Now it supports Automatic Reference Counting (ARC), and we don’t have to retain and release objects; the compiler takes care of the job automatically at compile time. We will explain memory management in Swift from the compiler’s perspective. We will discuss the fundamentals and gradually make our way to the internals of ARC and the Swift Runtime, answering the questions below:
What is Memory Management?
What are Memory Management Issues?
What are the Memory Management Rules?
How does the Swift compiler implement Automatic Reference Counting?
How does ARC work?
How do we handle Memory in ARC?
How are strong, weak and unowned references implemented?
What is the Swift Runtime?
What are Side Tables?
What is the life cycle of Swift objects?
What are the Reference Count Invariants during the Swift object lifecycle?
A famous quote about learning is :
“An investment in knowledge pays the best interest.”
So, let's begin.
What is Memory Management ?
At the hardware level, memory is just a long list of bytes. We can organize it into three virtual parts:
Stack, where all local variables go.
Global data, where static variables, constants and type metadata go.
Heap, where all dynamically allocated objects go. Basically, everything that has a lifetime is stored here.
We’ll use ‘objects’ and ‘dynamically allocated objects’ interchangeably; these are Swift reference types and some special cases of value types.
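As a rough sketch (for illustration only; the compiler is free to optimize, so actual placement may differ), value types typically live on the stack while class instances are allocated on the heap and managed by reference counting:
struct Point {          // value type: a local Point lives on the stack
    var x = 0.0
    var y = 0.0
}

class Node {             // reference type: instances are allocated on the heap
    var value = 0
}

func demo() {
    let point = Point()  // copied by value, no reference counting involved
    let node = Node()    // heap allocation; lifetime is managed by ARC
    print(point.x, node.value)
}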
So we can define Memory Management:
“Memory Management is the process of controlling program’s memory. It is critical to understand how it works, otherwise you are likely to run across random crashes and subtle bugs.”
What are Memory Management Issues?
As per Apple documentation, the two major issues in memory management are:
Freeing or overwriting data that is still in use. It causes memory corruption and typically results in your application crashing, or worse, corrupted user data.
Not freeing data that is no longer in use causes memory leaks. When allocated memory is not freed even though it is never going to be used again, it is known as memory leak. Leaks cause your application to use ever-increasing amounts of memory, which in turn may result in poor system performance or (in iOS) your application being terminated.
What are the Memory Management Rules ?
Memory Management Rules are :
We own the objects we create, and we have to subsequently release them when they are no longer needed.
Use Retain to gain ownership of an object that you did not create. You have to release these objects too when they are not needed.
Don’t release the objects that you don’t own.
How does the Swift compiler implement Automatic Reference Counting?
Memory management is tightly connected with the concept of Ownership. Ownership is the responsibility of some piece of code to eventually cause an object to be destroyed. Any language with a concept of destruction has a concept of ownership. In some languages, like C and non-ARC Objective-C, ownership is managed explicitly by programmers. In other languages, like C++ (in part), ownership is managed by the language. Even languages with implicit memory management still have libraries with concepts of ownership, because there are other program resources besides memory, and it is important to understand what code has the responsibility to release those resources.
Swift already has an ownership system, but it’s “under the covers”: it’s an implementation detail that programmers have little ability to influence.
Automatic Reference Counting (ARC) is Swift’s ownership system, which implicitly imposes a set of conventions for managing and transferring ownership.
Swift uses Automatic Reference Counting (ARC) to track and manage your app’s memory usage. In most cases, this means that memory management “just works” in Swift, and you do not need to think about memory management yourself. ARC automatically frees up the memory used by class instances when those instances are no longer needed.
However, in a few cases ARC requires more information about the relationships between parts of your code in order to manage memory for you.
The name by which an object can be pointed is called a reference. Swift references have two levels of strength: strong and weak. Additionally, weak references have a flavour, called unowned.
“The essence of Swift memory management is: Swift preserves an object if it is strongly referenced and deallocates it otherwise. The rest is just an implementation detail.”
How Does ARC Work ?
Every time you create a new instance of a class, ARC allocates a chunk of memory to store information about that instance. This memory holds information about the type of the instance, together with the values of any stored properties associated with that instance.
Additionally, when an instance is no longer needed, ARC frees up the memory used by that instance so that the memory can be used for other purposes instead. This ensures that class instances do not take up space in memory when they are no longer needed.
However, if ARC were to deallocate an instance that was still in use, it would no longer be possible to access that instance’s properties, or call that instance’s methods. Indeed, if you tried to access the instance, your app would most likely crash.
To make sure that instances don’t disappear while they are still needed, ARC tracks how many properties, constants, and variables are currently referring to each class instance. ARC will not deallocate an instance as long as at least one active reference to that instance still exists.
To make this possible, whenever you assign a class instance to a property, constant, or variable, that property, constant, or variable makes a strong reference to the instance. The reference is called a “strong” reference because it keeps a firm hold on that instance, and does not allow it to be deallocated for as long as that strong reference remains.
Example :
Here’s an example of how Automatic Reference Counting works. This example starts with a simple class called Person, which defines a stored constant property called name:
class Person {
let name: String
init(name: String) {
self.name = name
print("\(name) is being initialized")
}
deinit {
print("\(name) is being deinitialized")
}
}
The Person class has an initializer that sets the instance’s name property and prints a message to indicate that initialization is underway. The Person class also has a deinitializer that prints a message when an instance of the class is deallocated.
The next code snippet defines three variables of type Person?, which are used to set up multiple references to a new Person instance in subsequent code snippets. Because these variables are of an optional type (Person?, not Person), they are automatically initialized with a value of nil, and do not currently reference a Person instance.
var reference1: Person?
var reference2: Person?
var reference3: Person?
You can now create a new Person instance and assign it to one of these three variables:
reference1 = Person(name: "John Appleseed")
// Prints "John Appleseed is being initialized"
Note that the message "John Appleseed is being initialized" is printed at the point that you call the Person class’s initializer. This confirms that initialization has taken place.
Because the new Person instance has been assigned to the reference1 variable, there is now a strong reference from reference1 to the new Person instance. Because there is at least one strong reference, ARC makes sure that this Person is kept in memory and is not deallocated.
If you assign the same Person instance to two more variables, two more strong references to that instance are established:
reference2 = reference1
reference3 = reference1
There are now three strong references to this single Person instance.
If you break two of these strong references (including the original reference) by assigning nil to two of the variables, a single strong reference remains, and the Person instance is not deallocated:
reference1 = nil
reference2 = nil
ARC does not deallocate the Person instance until the third and final strong reference is broken, at which point it’s clear that you are no longer using the Person instance:
reference3 = nil
// Prints "John Appleseed is being deinitialized"
How to handle Memory in ARC ?
You don't need to call release and retain with ARC. All of a view controller's objects are released when the view controller is removed. Similarly, any object's sub-objects are released when that object is released. Note that if other classes hold a strong reference to an object, the object won't be released. So, it is recommended to use weak properties for delegates.
How are strong, weak and unowned references implemented?
The purpose of a strong reference is to keep an object alive. Strong referencing might result in several non-trivial problems.
Retain cycles. Since Swift is not a cycle-collecting language, two objects that (possibly indirectly) hold strong references to each other form a reference cycle and are never deallocated. We must write boilerplate code to explicitly break such cycles.
It is not always possible to make strong references valid immediately on object construction, e.g. with delegates.
Weak references address the problem of back references. An object can be destroyed even if there are weak references pointing to it. A weak reference returns nil when the object it points to is no longer alive. This is called zeroing.
Unowned references are a different flavor of weak, designed for tight validity invariants. Unowned references are non-zeroing: when trying to read a non-existent object through an unowned reference, the program crashes with an assertion error. They are useful for tracking down and fixing consistency bugs.
What is Swift Runtime ?
The mechanism of ARC is implemented in a library called Swift Runtime. It implements such core features as the runtime type system, including dynamic casting, generics, and protocol conformance registration.
Swift Runtime represents every dynamically allocated object with HeapObject struct. It contains all the pieces of data which make up an object in Swift: reference counts and type metadata.
Internally every Swift object has three reference counts: one for each kind of reference. At the SIL generation phase, swiftc compiler inserts calls to the methods swift_retain() and swift_release(), wherever it’s appropriate. This is done by intercepting initialization and destruction of HeapObjects.
Compilation is one of the steps of Xcode Build System.
What are Side Tables?
Side tables are the mechanism for implementing Swift weak references.
Typically objects don’t have any weak references, hence it is wasteful to reserve space for weak reference count in every object. This information is stored externally in side tables, so that it can be allocated only when it’s really needed.
Instead of pointing directly to an object, a weak reference points to the side table, which in turn points to the object. This solves two problems:
it saves the memory for the weak reference count until an object really needs it;
it allows the weak reference to be zeroed out safely, since it does not point directly to the object and is no longer subject to race conditions.
A side table is just a reference count plus a pointer to an object. It is declared in the Swift Runtime as follows (C++ code):
class HeapObjectSideTableEntry {
std::atomic<HeapObject*> object;
SideTableRefCounts refCounts;
// Operations to increment and decrement reference counts
}
What is the life cycle of Swift objects?
Swift objects have their own life cycle, represented by the finite state machine in the figure below. Square brackets indicate a condition that triggers a transition from one state to another. We will discuss finite state machines in Eliminating Degenerate View Controller States.
In live state an object is alive. Its reference counts are initialized to 1 strong, 1 unowned and 1 weak (side table starts at +1). Strong and unowned reference access work normally. Once there is a weak reference to the object, the side table is created. The weak reference points to the side table instead of the object.
From the live state, the object moves into the deiniting state once strong reference count reaches zero. The deiniting state means that deinit() is in progress. At this point strong ref operations have no effect. Weak reference reads return nil, if there is an associated side table (otherwise there are no weak refs). Unowned reads trigger assertion failure. New unowned references can still be stored. From this state, the object can take two routes:
A shortcut in case there are no weak or unowned references and no side table: the object transitions to the dead state and is removed from memory immediately.
Otherwise, the object moves to deinited state.
In the deinited state, deinit() has been completed and the object has outstanding unowned references (at least the initial +1). Strong and weak stores and reads cannot happen at this point, and neither can unowned stores. Unowned reads trigger an assertion error. The object can take two routes from here:
In case there are no weak references, the object can be deallocated immediately. It transitions into the dead state.
Otherwise, there is still a side table to be removed and the object moves into the freed state.
In the freed state the object is fully deallocated, but its side table is still alive. During this phase the weak reference count reaches zero and the side table is destroyed. The object transitions into its final state.
In the dead state there is nothing left from the object, except for the pointer to it. The pointer to the HeapObject is freed from the Heap, leaving no traces of the object in memory.
What are the Reference Count Invariants during the Swift object lifecycle ?
During their life cycle, objects maintain the following invariants:
When the strong reference count becomes zero, the object is deinited. Unowned reference reads raise assertion errors, weak reference reads become nil.
The unowned reference count adds +1 to the strong one, which is decremented after object’s deinit completes.
The weak reference count adds +1 to the unowned reference count. It is decremented after the object is freed from memory.
Conclusion
In this article, We understood about Advanced iOS Memory management in Swift. Automatic reference counting is no magic and the better we understand how it works internally, the less our code is prone to memory management errors. Here are the key points to remember:
Weak references point to a side table. Unowned and strong references point to the object itself.
Automatic reference counting is implemented at the compiler level. The swiftc compiler inserts calls to release and retain wherever appropriate.
Swift objects are not destroyed immediately. Instead, they undergo 5 phases in their life cycle: live -> deiniting -> deinited -> freed -> dead.
Thanks for reading! I hope you enjoyed this article and learned about the advanced memory management concepts in Swift. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn about nine useful criteria of a reliable React component in ReactJS. We want to know how React embraces component-based architecture: we can compose complex user interfaces from smaller pieces and take advantage of component reusability and abstracted DOM manipulations.
We know about React : “Component-based development is productive: a complex system is built from specialized and easy to manage pieces. Yet only well designed components ensure composition and reusability benefits.“
Despite the application complexity, the rush to meet deadlines and unexpectedly changing requirements, we must constantly walk the thin line of architectural correctness: keep our components decoupled, focused on a single task and well tested.
Luckily, reliable components have common characteristics. We will discuss the following 9 useful criteria in detail.
Single responsibility
Encapsulated
Composable
Reusable
Pure or Almost Pure
Testable and Tested
Meaningful
Do Continuous Improvement
Reliability
A famous quote about learning is :
” Try to learn something about everything and everything about something.”
So Let’s begin.
When writing a React application, we regularly need to ask ourselves questions:
How to correctly structure the component?
At what point a big component should split into smaller components?
How to design a communication between components that prevents tight coupling?
Technical debt makes it progressively harder to modify existing functionality or create new functionality. It builds up when we write big components with many responsibilities, tightly couple components or forget about unit tests.
We can avoid these issues by applying the criteria of a reliable React component. Let's look at these criteria one by one.
1. Single responsibility
A fundamental rule to consider when writing React components is the single responsibility principle.
” A component has a single responsibility when it has one reason to change. “
Single responsibility principle (abbreviated SRP) requires a component to have one reason to change.
A component has one reason to change when it implements one responsibility, or simpler when it does one thing.
A responsibility is either to render a list of items, or to show a date picker, or to make an HTTP request, or to draw a chart, or to lazy load an image, etc. Our component should pick only one responsibility and implement it. When we modify the way the component implements its responsibility (e.g. a change to limit the number of items for the render a list of items responsibility), it has one reason to change.
Why is it important to have only one reason to change? Because component’s modification becomes isolated and under control.
Having one responsibility restricts the component size and makes it focused on one thing. A component focused on one thing is convenient to code, and later modify, reuse and test.
Let’s follow a few examples.
Example 1 :- A component fetches remote data, correspondingly it has one reason to change when fetch logic changes.
A reason to change happens when:
The server URL is modified
The response format is modified
You want to use a different HTTP requests library
Or any modification related to fetch logic only.
Example 2 :- A table component maps an array of data to a list of row components, as result having one reason to change when mapping logic changes.
A reason to change occurs when:
We have a task to limit the number of rendered row components (e.g. display up to 25 rows)
We’re asked to show a message “The list is empty” when there are no items to display
Or any modification related to mapping of array to row components only.
Does our component have many responsibilities? If the answer is yes, split the component into chunks by each individual responsibility.
An alternative reasoning about the single responsibility principle says to create the component around a clearly distinguishable axis of change. An axis of change attracts modifications of the same meaning.
In the previous 2 examples, the axis of change were fetch logic and mapping logic.
Units written at an early project stage will change often until the release stage is reached. These often-changing components are required to be easily modifiable in isolation: a goal of the SRP.
Case study: make component have one responsibility
Imagine a component that makes an HTTP request to a specialized server to get the current weather. When data is successfully fetched, the same component uses the response to display the weather:
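A minimal sketch of what such a combined component could look like (assuming the axios library and the http://weather.com/api endpoint used later in this section):
// Sketch: one component with 2 responsibilities (fetch logic + weather visualization)
class Weather extends Component {
  constructor(props) {
    super(props);
    this.state = { temperature: 'N/A', windSpeed: 'N/A' };
  }
  render() {
    const { temperature, windSpeed } = this.state;
    return (
      <div className="weather">
        <div>Temperature: {temperature}°C</div>
        <div>Wind: {windSpeed} km/h</div>
      </div>
    );
  }
  componentDidMount() {
    axios.get('http://weather.com/api').then(response => {
      const { current } = response.data;
      this.setState({
        temperature: current.temperature,
        windSpeed: current.windSpeed
      });
    });
  }
}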
When dealing with situations like this, ask ourselves: do we have to split the component into smaller pieces? The question is best answered by determining how the component might change according to its responsibilities.
The weather component has 2 reasons to change:
Fetch logic in componentDidMount(): server URL or response format can be modified
Weather visualization in render(): the way component displays the weather can change several times
The solution is to divide <Weather> in 2 components: each having one responsibility. Let’s name the chunks <WeatherFetch> and <WeatherInfo>.
First component <WeatherFetch> is responsible for fetching the weather, extracting response data and saving it to state. It has one fetch logic reason to change:
import axios from 'axios';
// Solution: Make the component responsible only for fetching
class WeatherFetch extends Component {
constructor(props) {
super(props);
this.state = { temperature: 'N/A', windSpeed: 'N/A' };
}
render() {
const { temperature, windSpeed } = this.state;
return (
<WeatherInfo temperature={temperature} windSpeed={windSpeed} />
);
}
componentDidMount() {
axios.get('http://weather.com/api').then(response => {
const { current } = response.data;
this.setState({
temperature: current.temperature,
windSpeed: current.windSpeed
});
});
}
}
What benefits does such structuring bring?
For instance, we would like to use async/await syntax instead of promises to get the response from server. This is a reason to change related to fetch logic:
// Reason to change: use async/await syntax
class WeatherFetch extends Component {
// ..... //
async componentDidMount() {
const response = await axios.get('http://weather.com/api');
const { current } = response.data;
this.setState({
temperature: current.temperature,
windSpeed: current.windSpeed
});
}
}
Because <WeatherFetch> has one fetch logic reason to change, any modification of this component happens in isolation. Using async/await does not directly affect the way the weather is displayed.
Then <WeatherFetch> renders <WeatherInfo>. The latter is responsible only for displaying the weather, having one visual reason to change:
// Solution: Make the component responsible for displaying the weather
function WeatherInfo({ temperature, windSpeed }) {
return (
<div className="weather">
<div>Temperature: {temperature}°C</div>
<div>Wind: {windSpeed} km/h</div>
</div>
);
}
Let's change <WeatherInfo> so that instead of "Wind: 0 km/h" it displays "Wind: calm". That's a reason to change related to the visual display of the weather:
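A sketch of the modified component might look like this:
// Sketch: a change that touches only the weather visualization
function WeatherInfo({ temperature, windSpeed }) {
  const windInfo = windSpeed === 0 ? 'calm' : `${windSpeed} km/h`;
  return (
    <div className="weather">
      <div>Temperature: {temperature}°C</div>
      <div>Wind: {windInfo}</div>
    </div>
  );
}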
Again, this modification of <WeatherInfo> happens in isolation and does not affect <WeatherFetch> component.
<WeatherFetch> and <WeatherInfo> each have their own single responsibility. A change in one component has a small effect on the other. That's the power of the single responsibility principle: modification in isolation that lightly and predictably affects the other components of the system.
2. Encapsulated
“An encapsulated component provides props to control its behavior while not exposing its internal structure. “
Coupling is a system characteristic that determines the degree of dependency between components.
Based on the degree of components dependence, 2 coupling types are distinguishable:
Loose coupling happens when the application components have little or no knowledge about other components.
Tight coupling happens when the application components know a lot of details about each other.
Loose coupling is the goal when designing application’s structure and the relationship between components.
Loose coupling leads to the following benefits:
Allow making changes in one area of the application without affecting others
Any component can be replaced with an alternative implementation
Enables components reusability across the application, thus favoring Don’t repeat yourself principle
Independent components are easier to test, increasing the application code coverage
On the contrary, a tightly coupled system loses the benefits described above. The main drawback is the difficulty of modifying a component that is highly dependent on other components. Even a single modification might lead to a cascade of dependent modifications.
Encapsulation, or Information Hiding, is a fundamental principle of how to design components, and is the key to loose coupling.
Information hiding
A well encapsulated component hides its internal structure and provides a set of props to control its behavior.
Hiding internal structure is essential. Other components are not allowed to know or rely on the component’s internal structure or implementation details.
A React component can be functional or class based, define instance methods, setup refs, have state or use lifecycle methods. These implementation details are encapsulated within the component itself, and other components shouldn’t know anything about these details.
Units that precisely hide their internal structure are less dependent on each other. Lowering the dependency degree brings the benefits of loose coupling.
Communication
Details hiding is a restriction that isolates the component. Nevertheless, we need a way to make components communicate. So, welcome props.
Props are meant to be plain, raw data that are component’s input.
A prop is recommended to be a primitive type (e.g. string, number, boolean):
<Message text="Hello world!" modal={false} />;
When necessary, use a complex data structure like an object or an array:
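For instance, a hypothetical <MoviesList> component could receive an array of movie titles through an items prop:
<MoviesList items={['Batman Begins', 'Blade Runner']} />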
To avoid breaking encapsulation, watch out for the details passed through props. A parent component that sets child props should not expose any details about its internal structure. For example, it's a bad decision to transmit the whole component instance or refs through props.
Accessing global variables is another problem that negatively affects encapsulation.
Case study: encapsulation restoration
A component's instance and state object are implementation details encapsulated inside the component. Thus, a sure way to break encapsulation is to pass the parent instance to a child component for state management.
Let’s study such a situation.
A simple application shows a number and 2 buttons. First button increases and second button decreases the number. The application consists of two components: <App> and <Controls>.
<App> holds the state object that contains the modifiable number as a property, and renders this number:
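A minimal sketch of such an <App> (the parent={this} prop is the problematic part discussed below):
// Sketch: <App> passes its own instance to <Controls>
class App extends Component {
  constructor(props) {
    super(props);
    this.state = { number: 0 };
  }
  render() {
    return (
      <div className="app">
        <span className="number">{this.state.number}</span>
        <Controls parent={this} />
      </div>
    );
  }
}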
<Controls> renders the buttons and attaches click event handlers to them. When user clicks a button, parent component state is updated (updateNumber() method) by increasing +1 or decreasing -1 the displayed number:
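A corresponding sketch of <Controls>, which reaches into the parent instance to update its state:
// Sketch: <Controls> updates the parent's state directly
class Controls extends Component {
  render() {
    return (
      <div className="controls">
        <button onClick={() => this.updateNumber(+1)}>Increase</button>
        <button onClick={() => this.updateNumber(-1)}>Decrease</button>
      </div>
    );
  }
  updateNumber(toAdd) {
    this.props.parent.setState(prevState => ({
      number: prevState.number + toAdd
    }));
  }
}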
The first problem is <App>’s broken encapsulation, since its internal structure spreads across the application. <App> incorrectly permits <Controls> to update its state directly.
Consequently, the second problem is that <Controls> knows too many details about its parent <App>. It has access to parent instance, knows that parent is a stateful component, knows the state object structure (number property) and knows how to update the state.
The broken encapsulation couples <App> and <Controls> components.
A troublesome outcome is that <Controls> would be complicated to test and reuse. A slight modification to the structure of <App> leads to a cascade of modifications in <Controls> (and in similarly coupled components in a bigger application).
The solution is to design a convenient communication interface that respects loose coupling and strong encapsulation. Let’s improve the structure and props of both components in order to restore the encapsulation.
Only the component itself should know its state structure. The state management of <App> should move from <Controls> (the updateNumber() method) to the right place: the <App> component.
Later, <App> is modified to provide <Controls> with props onIncrease and onDecrease. These are simple callbacks that update <App> state:
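A sketch of the modified <App>:
// Sketch: <App> owns its state and exposes only callbacks
class App extends Component {
  constructor(props) {
    super(props);
    this.state = { number: 0 };
  }
  updateNumber(toAdd) {
    this.setState(prevState => ({
      number: prevState.number + toAdd
    }));
  }
  render() {
    return (
      <div className="app">
        <span className="number">{this.state.number}</span>
        <Controls
          onIncrease={() => this.updateNumber(+1)}
          onDecrease={() => this.updateNumber(-1)}
        />
      </div>
    );
  }
}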
Now <Controls> receives callbacks for increasing and decreasing the number. Notice the decoupling and encapsulation restoration moment: <Controls> no longer needs to access the parent instance and modify <App> state directly.
Moreover <Controls> is transformed into a functional component:
// Solution: Use callbacks to update parent state
function Controls({ onIncrease, onDecrease }) {
return (
<div className="controls">
<button onClick={onIncrease}>Increase</button>
<button onClick={onDecrease}>Decrease</button>
</div>
);
}
<App> encapsulation is now restored. The component manages its state by itself, as it should be.
Furthermore <Controls> no longer depends on <App> implementation details. onIncrease and onDecrease prop functions are called when corresponding button is clicked, and <Controls> does not know (and should not know) what happens inside those functions.
<Controls> reusability and testability significantly increased.
The reuse of <Controls> is convenient because it requires only callbacks, without any other dependencies. Testing is also handy: just verify whether callbacks are executed on buttons click.
3. Composable
“A composable component is created from the composition of smaller specialized components.”
Composition is a way to combine components to create a bigger (composed) component. Composition is the heart of React.
Fortunately, composition is easy to understand. Take a set of small pieces, combine them, and create a bigger thing.
Let’s look at a common frontend application composition pattern. The application is composed of a header at the top, footer at the bottom, sidebar on the left and payload content in the middle:
This demonstrates how well composition builds the application. Such an organization is expressive and easy to understand.
React composes components expressively and naturally. The library uses a declarative paradigm that doesn’t suppress the expressiveness of composition. The following components render the described application:
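A sketch of what that JSX could look like, using the component names described below:
const app = (
  <Application>
    <Header />
    <Sidebar>
      <Menu />
    </Sidebar>
    <Content>
      <Article />
    </Content>
    <Footer />
  </Application>
);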
<Application> is composed of <Header>, <Sidebar>, <Content> and <Footer>. <Sidebar> has one component <Menu>, as well as <Content> has one <Article>.
How does composition relate with single responsibility and encapsulation?
” Single responsibility principle describes how to split requirements into components, encapsulation describes how to organize these components, and composition describes how to glue the whole system back. “
Composition benefits
Single responsibility
An important aspect of composition is the ability to compose complex components from smaller specialized components. This divide and conquer approach helps an authority component conform to single responsibility principle.
Recall the previous code snippet. <Application> has the responsibility to render the header, footer, sidebar and main regions.
It makes sense to divide this responsibility into four sub-responsibilities, each of which is implemented by a specialized component: <Header>, <Sidebar>, <Content> and <Footer>. Later, composition glues <Application> back together from these specialized components.
Now the benefit shows up. Composition makes <Application> conform to the single responsibility principle by allowing its children to implement the sub-responsibilities.
Reusability
Components using composition can reuse common logic. This is the benefit of reusability.
For instance, components <Composed1> and <Composed2> share common code:
const instance1 = (
<Composed1>
/* Specific to Composed1 code... */
/* Common code... */
</Composed1>
);
const instance2 = (
<Composed2>
/* Common code... */
/* Specific to Composed2 code... */
</Composed2>
);
Since code duplication is a bad practice, how to make components reuse common code?
Firstly, encapsulate common code in a new component <Common>. Secondly, <Composed1> and <Composed2> should use composition to include <Common>, fixing code duplication:
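A sketch of the fixed composition (with <Piece1> and <Piece2> as hypothetical placeholders for the code specific to each component):
const instance1 = (
  <Composed1>
    <Piece1 />
    <Common />
  </Composed1>
);
const instance2 = (
  <Composed2>
    <Common />
    <Piece2 />
  </Composed2>
);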
Reusable components favor Don’t repeat yourself (DRY) principle. This beneficial practice saves efforts and time.
Flexibility
In React a composable component can control its children, usually through children prop. This leads to another benefit of flexibility.
For example, a component should render a message depending on user’s device. Use composition’s flexibility to implement this requirement:
function ByDevice({ children: { mobile, other } }) {
return Utils.isMobile() ? mobile : other;
}
<ByDevice>{{
mobile: <div>Mobile detected!</div>,
other: <div>Not a mobile device</div>
}}</ByDevice>
<ByDevice> composed component renders the message "Mobile detected!" for a mobile, and "Not a mobile device" for other devices.
Efficiency
User interfaces are composable hierarchical structures. Thus composition of components is an efficient way to construct user interfaces.
4. Reusable
“A reusable component is written once but used multiple times.”
Imagine a fantasy world where software development is mostly reinventing the wheel.
When coding, we can’t use any existing libraries or utilities. Even across the application we can’t use code that we already wrote.
In such environment, would it be possible to write an application in a reasonable amount of time? Definitely not.
Welcome reusability. Make things work, not reinvent how they work.
Reuse across application
According to Don’t repeat yourself (DRY) principle, every piece of knowledge must have a single, unambiguous, authoritative representation within a system. The principle advises to avoid repetition.
Code repetition increases complexity and maintenance efforts without adding significant value. An update of the logic forces you to modify all its clones within the application.
Repetition problem is solved with reusable components. Write once and use many times: efficient and time saving strategy.
However we don’t get reusability property for free. A component is reusable when it conforms to single responsibility principle and has correct encapsulation.
Conforming to single responsibility is essential:
“Reuse of a component actually means the reuse of its responsibility implementation.”
Components that have only one responsibility are the easiest to reuse.
But when a component incorrectly has multiple responsibilities, its reuse adds heavy overhead. We want to reuse only one responsibility implementation, but we also get the unneeded implementation of the out-of-place responsibilities.
We want a banana, and we get a banana, plus all the jungle with it.
Correct encapsulation creates a component that isn't stuck with dependencies. A hidden internal structure and a focused set of props enable the component to fit nicely in the multiple places where it's about to be reused.
Reuse of 3rd party libraries
A regular working day. We’ve just read the task to add a new feature to the application. Before firing up the text editor, hold on for a few minutes…
There's a big chance that the problem we start working on has already been solved. Due to React's popularity and great open source community, it is worth searching for an existing solution.
Good libraries positively affect architectural decisions and advocate best practices. In my experience, the top influencers are react-router and redux.
react-router uses declarative routing to structure a Single Page Application. Associate a URL path with your component using <Route>. Then router will render the component for you when user visits the matched URL.
redux and the react-redux higher order components introduce unidirectional and predictable application state management. They extract async and impure code (like HTTP requests) out of components, favoring the single responsibility principle and creating pure or almost-pure components.
To be sure that a 3rd party library is worth using, here's a checklist of details we need to verify:
Documentation: verify whether the library has meaningful readme.md file and detailed documentation
Tested: a sign of trustworthy library is high code coverage
Maintenance: see how often the library author creates new features, fixes bugs and generally maintains the library.
5. Pure or Almost-pure
“A pure component always renders same elements for same prop values.”
“An almost-pure component always renders same elements for same prop values, and can produce a side effect.”
In functional programming terms, a pure function always returns the same output for the same input. Let's see a simple pure function:
function sum(a, b) {
return a + b;
}
sum(5, 10); // => 15
For given two numbers, sum() function always returns the same sum.
A function becomes impure when it returns different output for the same input. This can happen because the function relies on global state. For example:
let said = false;
function sayOnce(message) {
if (said) {
return null;
}
said = true;
return message;
}
sayOnce('Hello World!'); // => 'Hello World!'
sayOnce('Hello World!'); // => null
sayOnce('Hello World!') on first call returns 'Hello World!'.
Even when called with the same argument 'Hello World!', on later invocations sayOnce() returns null. That's the sign of an impure function that relies on global state: the said variable.
The sayOnce() body has a statement, said = true, that modifies the global state. This produces a side effect, which is another sign of an impure function.
Consequently, pure functions have no side effects and don't rely on global state. Their single source of truth is their parameters. Thus, pure functions are predictable and deterministic, reusable and straightforward to test.
React components should benefit from pure property. Given the same prop values, a pure component (not to be confused with React.PureComponent) always renders the same elements. Let’s take a look:
function Message({ text }) {
return <div className="message">{text}</div>;
}
<Message text="Hello World!" />
// => <div class="message">Hello World!</div>
You are guaranteed that <Message> for the same text prop value renders the same elements.
It’s not always possible to make a component pure. Sometimes we have to ask the environment for information, like in the following case:
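For instance, an <InputField> like the following sketch keeps the typed value in its state:
// Sketch: a stateful component that depends on the environment (user input)
class InputField extends Component {
  constructor(props) {
    super(props);
    this.state = { value: '' };
    this.handleChange = this.handleChange.bind(this);
  }
  handleChange(event) {
    this.setState({ value: event.target.value });
  }
  render() {
    return (
      <div>
        You typed: {this.state.value}
        <input type="text" value={this.state.value} onChange={this.handleChange} />
      </div>
    );
  }
}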
<InputField> is a stateful component that doesn't accept any props, yet renders different output depending on what the user types into the input. <InputField> has to be impure, because it accesses the environment through the input field.
Impure code is a necessary evil. Most applications require global state, network requests, local storage and the like. What we can do is isolate impure code from pure code, i.e. apply purification to our components.
Isolated impure code explicitly shows that it has side effects or relies on global state. Being in isolation, it has a less unpredictable effect on the rest of the system.
Let's go through some purification examples in detail.
Case study: purification from global variables
We don’t like global variables. They break encapsulation, create unpredictable behavior and make testing difficult.
Global variables can be used as mutable or immutable (read-only) objects.
Mutating global variables creates uncontrolled behavior of components. Data is injected and modified at will, confusing the reconciliation process. This is a mistake.
If we need a mutable global state, the solution is a predictable application state management. Consider using Redux.
An immutable (or read-only) usage of globals is often application’s configuration object. This object contains the site name, logged-in user name or any other configuration information.
The following statement defines a configuration object that holds the site name:
export const globalConfig = {
siteName: 'Animals in Zoo'
};
Next, <Header> component renders the header of an application, including the display of site name "Animals in Zoo":
import { globalConfig } from './config';
export default function Header({ children }) {
const heading =
globalConfig.siteName ? <h1>{globalConfig.siteName}</h1> : null;
return (
<div>
{heading}
{children}
</div>
);
}
<Header> component uses globalConfig.siteName to render site name inside a heading tag <h1>. When site name is not defined (i.e. null), the heading is not displayed.
The first to notice is that <Header> is impure. Given same value of children, the component returns different results because of globalConfig.siteName variations:
// globalConfig.siteName is 'Animals in Zoo'
<Header>Some content</Header>
// Renders:
<div>
<h1>Animals in Zoo</h1>
Some content
</div>
or
// globalConfig.siteName is `null`
<Header>Some content</Header>
// Renders:
<div>
Some content
</div>
The second problem is testing difficulties. To test how component handles null site name, we have to modify the global variable globalConfig.siteName = null manually:
import assert from 'assert';
import { shallow } from 'enzyme';
import { globalConfig } from './config';
import Header from './Header';
describe('<Header />', function() {
it('should render the heading', function() {
const wrapper = shallow(
<Header>Some content</Header>
);
assert(wrapper.contains(<h1>Animals in Zoo</h1>));
});
it('should not render the heading', function() {
// Modification of global variable:
globalConfig.siteName = null;
const wrapper = shallow(
<Header>Some content</Header>
);
assert(wrapper.find('h1').length === 0);
});
});
The modification of the global variable globalConfig.siteName = null for the sake of testing is hacky and uncomfortable. It happens because <Header> has a tight dependency on globals.
To solve such impurities, rather than injecting globals into component’s scope, make the global variable an input of the component.
Let’s modify <Header> to accept one more prop siteName. Then wrap the component with defaultProps() higher order component (HOC) from recompose library. defaultProps() ensures fulfilling the missing props with default values:
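A sketch of how this could look (assuming the defaultProps() helper from the recompose package and the ./config module defined above):
import React from 'react';
import { defaultProps } from 'recompose';
import { globalConfig } from './config';

// Pure version: siteName comes in as a prop (named export, handy for tests)
export function Header({ children, siteName }) {
  const heading = siteName ? <h1>{siteName}</h1> : null;
  return (
    <div>
      {heading}
      {children}
    </div>
  );
}

// Impure part isolated here: fall back to the global config when siteName is missing
export default defaultProps({
  siteName: globalConfig.siteName
})(Header);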
<Header> becomes a pure functional component, and does not depend directly on globalConfig variable. The pure version is a named export: export function Header() {...}, which is useful for testing.
At the same time, the wrapped component with defaultProps({...}) sets globalConfig.siteName when siteName prop is missing. That’s the place where impure code is separated and isolated.
Let’s test the pure version of <Header> (remember to use a named import):
import assert from 'assert';
import { shallow } from 'enzyme';
import { Header } from './Header'; // Import the pure Header
describe('<Header />', function() {
it('should render the heading', function() {
const wrapper = shallow(
<Header siteName="Animals in Zoo">Some content</Header>
);
assert(wrapper.contains(<h1>Animals in Zoo</h1>));
});
it('should not render the heading', function() {
const wrapper = shallow(
<Header siteName={null}>Some content</Header>
);
assert(wrapper.find('h1').length === 0);
});
});
This is great. Unit testing of pure <Header> is straightforward. The test does one thing: verify whether the component renders the expected elements for a given input. No need to import, access or modify global variables, no side effects magic.
Well designed components are easy to test, which is especially visible in the case of pure components.
6. Testable and Tested
“A tested component is verified whether it renders the expected output for a given input.“
“A testable component is easy to test. “
How to be sure that a component works as expected? We can say: “We manually verify how it works.”
If we plan to manually verify every component modification, sooner or later we're going to skip this tedious task, and sooner or later small defects are going to slip through.
That's why it is important to automate the verification of components: do unit testing. Unit tests make sure that our components work correctly every time we make a modification.
Unit testing is not only about early bugs detection. Another important aspect is the ability to verify how well components are built architecturally.
The following statement I find especially important:
“A component that is untestable or hard to test is most likely badly designed. “
A component is hard to test when it has a lot of props and dependencies, and requires mocks and access to global variables: these are the signs of a bad design.
When the component has weak architectural design, it becomes untestable. When the component is untestable, we simply skip writing unit tests: as result it remains untested.
In conclusion, the reason why many applications are untested is incorrectly designed components. Even if we want to test such an application, we can’t.
Case study: testable means well designed
Let's test 2 versions of <Controls> from the encapsulation point of view.
The following code tests <Controls> version that highly depends on the parent’s component structure:
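A rough sketch of such a test (assuming the enzyme testing utilities, a hypothetical ./Controls import path, and a helper <Temp> component standing in for the parent):
import React, { Component } from 'react';
import assert from 'assert';
import { shallow, mount } from 'enzyme';
import Controls from './Controls'; // hypothetical path

// <Temp> emulates the stateful parent whose state <Controls> mutates directly
class Temp extends Component {
  constructor(props) {
    super(props);
    this.state = { number: 0 };
  }
  render() {
    return null;
  }
}

describe('<Controls />', function() {
  it('should update parent state', function() {
    const parent = shallow(<Temp />);
    const wrapper = mount(<Controls parent={parent.instance()} />);

    assert(parent.state('number') === 0);

    wrapper.find('button').at(0).simulate('click');
    assert(parent.state('number') === 1);

    wrapper.find('button').at(1).simulate('click');
    assert(parent.state('number') === 0);
  });
});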
<Controls> is complicated to test, since it relies on the parent component's implementation details.
The test scenario requires an additional component, <Temp>, which emulates the parent. It permits verifying whether <Controls> correctly modifies the parent's state.
When <Controls> is independent of parent details, testing is easier. Let’s test the version with correct encapsulation:
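A sketch of this test (assuming enzyme and the sinon library for spies, plus a hypothetical ./Controls import path):
import React from 'react';
import assert from 'assert';
import { shallow } from 'enzyme';
import sinon from 'sinon';
import Controls from './Controls'; // hypothetical path

describe('<Controls />', function() {
  it('should execute the callbacks on button clicks', function() {
    const onIncrease = sinon.spy();
    const onDecrease = sinon.spy();
    const wrapper = shallow(
      <Controls onIncrease={onIncrease} onDecrease={onDecrease} />
    );

    wrapper.find('button').at(0).simulate('click');
    assert(onIncrease.calledOnce);

    wrapper.find('button').at(1).simulate('click');
    assert(onDecrease.calledOnce);
  });
});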
Strong encapsulation leads to easy and straightforward testing. And contrary a component with incorrect encapsulation is difficult to test.
Testability is a practical criteria to identify how well our components are structured.
7. Meaningful
“A meaningful component is easy to understand what it does.“
It's hard to overestimate the importance of readable code. How many times have we gotten stuck on obscure code? We see the characters, but don't see the meaning.
A developer spends more time reading and understanding code than actually writing it: coding activity is roughly 75% of the time understanding code, 20% modifying existing code and only 5% writing new source.
A little additional time spent on readability reduces the understanding time for teammates and for ourselves in the future. Naming practices become more important as the application grows, because the understanding effort increases with the volume of code.
Reading meaningful code is easy. Nevertheless, writing meaningfully requires clean code practices and a constant effort to express ourselves clearly.
Component naming
Pascal case
Component name is a concatenation of one or more words (mostly nouns) in pascal case. For instance <DatePicker>, <GridItem>, <Application>, <Header>.
Specialization
The more specialized a component is, the more words its name might contain.
A component named <HeaderMenu> suggests a menu in the header. A name <SidebarMenuItem> indicates a menu item located in sidebar.
A component is easy to understand when the name meaningfully implies the intent. To make this happen, often we have to use verbose names. That’s fine: more verbose is better than less clear.
Suppose we navigate some project files and identify 2 components: <Authors> and <AuthorsList>. Based on names only, can we conclude the difference between them? Most likely not.
To get the details, we have to open the <Authors> source file and explore the code. After doing that, we realize that <Authors> fetches the authors list from the server and renders the <AuthorsList> presentational component.
A more specialized name instead of <Authors> avoids this situation. Better names are <FetchAuthors>, <AuthorsContainer> or <AuthorsPage>.
One word – one concept
A word represents a concept. For example, a collection of rendered items concept is represented by list word.
Pick one word per concept, then keep the relation consistent within the whole application. The result is a predictable mental mapping of words to concepts that you get used to.
Readability suffers when the same concept is represented by many words. For example, we define a component that renders a list of orders <OrdersList>, and another that renders a list of expenses <ExpensesTable>.
The same concept of a collection of rendered items is represented by 2 different words: list and table. There’s no reason to use different words for the same concept. It adds confusion and breaks consistency in naming.
Name the components <OrdersList> and <ExpensesList> (using list word) or <OrdersTable> and <ExpensesTable> (using table word). Use whatever word we feel is better, just keep it consistent.
Code comments
Meaningful names for components, methods and variables are enough for making the code readable. Thus, comments are mostly redundant.
Case study: write self-explanatory code
Common misuse of comments is explanation of inexpressive and obscured naming. Let’s see such case:
// <Games> renders a list of games
// "data" prop contains a list of game data
function Games({ data }) {
// display up to 10 first games
const data1 = data.slice(0, 10);
// Map data1 to <Game> component
// "list" has an array of <Game> components
const list = data1.map(function(v) {
// "v" has game data
return <Game key={v.id} name={v.name} />;
});
return <ul>{list}</ul>;
}
<Games
data={[{ id: 1, name: 'Mario' }, { id: 2, name: 'Doom' }]}
/>
The comments in the above example clarify obscure code. <Games>, data, data1, v and the magic number 10 are inexpressive and difficult to understand.
If we refactor the component to have meaningful props and variables, the comments can easily be omitted:
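A possible refactoring sketch (renaming <Games> to a more descriptive <GamesList> and giving the variables meaningful names):
const GAMES_LIMIT = 10;

function GamesList({ items }) {
  const itemsSlice = items.slice(0, GAMES_LIMIT);
  const games = itemsSlice.map(function(gameItem) {
    return <Game key={gameItem.id} name={gameItem.name} />;
  });
  return <ul>{games}</ul>;
}

<GamesList
  items={[{ id: 1, name: 'Mario' }, { id: 2, name: 'Doom' }]}
/>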
Don't explain ourselves with comments. Write code that is self-explanatory and self-documenting.
Expressiveness stairs
We distinguish 4 expressiveness stairs of a component. The lower we move on the stairs, the more effort we need to understand the component.
We can understand what the component does from:
Reading name and props;
Consulting documentation;
Exploring the code;
Asking the author.
If name and props give enough information to integrate the component into application, that’s a solid expressiveness. Try to keep this high quality level.
Some components have complex logic, and even a good name can’t give the necessary details. It’s fine to consult the documentation.
If documentation is missing or doesn’t answer all the questions, We have to explore the code. Not the best option because of additional time spent, but it’s acceptable.
When exploring the code doesn't help decipher the component, the next step is asking the component's author for details. That's a sign of definitely bad naming; avoid going down to this step. Better to ask the author to refactor the code, or refactor it yourself.
8. Do continuous improvement
“We then said that rewriting is the essence of writing. We pointed out that professional writers rewrite their sentences over and over and then rewrite what they have rewritten.”
To produce a quality text, we have to rewrite our sentences multiple times: read what was written, simplify confusing places, use better synonyms, remove clutter words, then repeat until we have an enjoyable piece of text.
Interestingly, the same concept of rewriting applies to designing components.
Sometimes it’s hardly possible to create the right components structure at the first attempt. It happens because:
A tight deadline doesn’t allow spending enough time on system design
The initially chosen approach appears to be wrong
We’ve just found an open source library that solves the problem better
or any other reason.
Finding the right component structure is a series of trials and reviews. The more complex a component is, the more often it requires verification and refactoring.
Does the component implement a single responsibility, is it well encapsulated, is it tested enough? If we can't answer with a certain yes, we determine the weak part (by comparing it against the 9 attributes presented above) and refactor the component.
Pragmatically, development is a never-ending process of reviewing previous decisions and making improvements.
9. Reliability
Taking care of component quality requires effort and periodic review. It is worth the investment, since correct components are the foundation of a well designed system. Such a system is easy to maintain and grow, with complexity that increases linearly.
As result, development is relatively convenient at any project stage.
On the other hand, as the system size increases, we might forget to plan and regularly correct the structure and decrease coupling, taking the naive approach of just making it work.
But after the inevitable moment when the system becomes tightly coupled enough, fulfilling new requirements becomes exponentially complicated. We no longer control the code; the weakness of the system controls us. A bug fix creates new bugs, and a code update requires a cascade of related modifications.
How does the sad story end? We might throw away the current system and rewrite the code from scratch, or, most likely, continue eating the cactus. We have eaten plenty of cactuses, and you probably have too, and it's not the best feeling.
The solution is simple, yet demanding: write reliable components.
That’s all about in this article.
Conclusion
In this article, we understood the nine useful criteria of a reliable React component in ReactJS. The presented 9 characteristics suggest the same idea from different angles:
“A reliable component implements one responsibility, hides its internal structure and provides effective set of props to control its behavior.”
Single responsibility and encapsulation are the base of a solid design. We conclude that :
Single responsibility suggests to create a component that implements only one responsibility and has one reason to change.
Encapsulated component hides its internal structure and implementation details, and defines props to control the behavior and output.
Composition structures big and authority components. Just split them into smaller chunks, then use composition to glue the whole back together, making the complex simple.
Reusable components are the result of a well designed system. Reuse the code whenever you can to avoid repetition.
Side effects like network requests or global variables make components depend on the environment. Make them pure by returning the same output for the same prop values.
Meaningful component naming and expressive code are the key to readability. Your code must be understandable and welcoming to read.
Testing is not only an automated way of detecting bugs. If you find a component difficult to test, most likely it’s incorrectly designed.
A quality, extensible and maintainable, thus successful application stands on shoulders of reliable components.
What principles do you find useful when writing React components?
Thanks for reading! I hope you enjoyed this article and learned about the nine useful criteria of a reliable React component in ReactJS. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to us on this blog and support us in any way possible. Also like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn why design patterns are important and which are the most frequently used design patterns in Swift. Swift is a programming language that allows developers to create versatile applications for multiple operating systems (though it is most frequently used to write applications for iOS). When we are new to a programming language, we often don't know which design patterns we should use with it or how to implement them.
Being able to use a relevant design pattern is a prerequisite to creating functional, high-quality, and secure applications.
We’ve decided to help by taking an in-depth look at the design patterns most widely used in Swift and showing different approaches to solving common problems in mobile development with them.
A famous quote about learning is :
“ Anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young. ”
So Let’s begin.
Design Patterns: What they are and why you should know them ?
A software design pattern is a solution to a particular problem you might face when designing an app’s architecture. But unlike out-of-the-box services or open-source libraries, we can’t simply paste a design pattern into our application because it isn’t a piece of code. Rather, it’s a general concept for how to solve a problem. A design pattern is a template that tells you how to write code, but it’s up to you to fit our code to this template.
Design patterns bring several benefits:
Tested solutions. We don’t need to waste time and reinvent the wheel trying to solve a particular software development problem, as design patterns already provide the best solution and tell us how to implement it.
Code unification. Design patterns provide us with typical solutions that have been tested for drawbacks and bugs, helping us make fewer mistakes when designing our app architecture.
Common vocabulary. Instead of providing in-depth explanations of how to solve this or that software development problem, we can simply say what design pattern we used and other developers will immediately understand what solutions we implemented.
Types of Software Design Patterns
Before we describe the most common architecture patterns in Swift, you should first learn the three types of software design patterns and how they differ:
Creational Design Patterns
Structural Design Patterns
Behavioral Design Patterns
1. Creational Design Patterns
Creational software design patterns deal with object creation mechanisms, which increase flexibility and reuse of existing code. They try to instantiate objects in a manner suitable for the particular situation. Here are several creational design patterns:
Factory Method
Abstract Factory
Builder
Singleton
Prototype
2. Structural Design Patterns
Structural design patterns aim to simplify the design by finding an easy way of realizing relationships between classes and objects. These patterns explain how to assemble objects and classes into larger structures while keeping these structures flexible and efficient. These are some structural architecture patterns:
Adapter
Bridge
Facade
Decorator
Composite
Flyweight
Proxy
3. Behavioral Design Patterns
Behavioral design patterns identify common communication patterns between entities and implement these patterns.
These patterns are concerned with algorithms and the assignment of responsibilities between objects. Behavioral design patterns include:
Chain of Responsibility
Template Method
Command
Iterator
Mediator
Memento
Observer
Strategy
State
Visitor
Most of these design patterns, however, are rarely used, and you’re likely to forget how they work before you even need them. So we’ve handpicked the five design patterns most frequently used in Swift to develop applications for iOS and other operating systems.
Most frequently used design patterns in Swift
We’re going to provide only the essential information about each software design pattern – namely, how it works from the technical point of view and when it should be applied. We’ll also give an illustrative example in the Swift programming language.
1. Builder
The Builder pattern is a creational design pattern that allows us to create complex objects from simple objects step by step. This design pattern helps us use the same code for creating different object views.
Imagine a complex object that requires incremental initialization of multiple fields and nested objects. Typically, the initialization code for such objects is hidden inside a mammoth constructor with dozens of parameters. Or even worse, it can be scattered all over the client code.
The Builder design pattern calls for separating the construction of an object from its own class. The construction of this object is instead assigned to special objects called builders and split into multiple steps. To create an object, you successively call builder methods. And you don’t need to go through all the steps – only those required for creating an object with a particular configuration.
You should apply the Builder design pattern :
when you want to avoid using a telescopic constructor (when a constructor has too many parameters, it gets difficult to read and manage);
when your code needs to create different views of a specific object;
when you need to compose complex objects.
Example:
Suppose you’re developing an iOS application for a restaurant and you need to implement ordering functionality. You can introduce two structures, Dish and Order, and with the help of the OrderBuilder object, you can compose orders with different sets of dishes.
// Design Patterns: Builder
import Foundation
// Models
enum DishCategory: Int {
case firstCourses, mainCourses, garnishes, drinks
}
struct Dish {
var name: String
var price: Float
}
struct OrderItem {
var dish: Dish
var count: Int
}
struct Order {
var firstCourses: [OrderItem] = []
var mainCourses: [OrderItem] = []
var garnishes: [OrderItem] = []
var drinks: [OrderItem] = []
var price: Float {
let items = firstCourses + mainCourses + garnishes + drinks
return items.reduce(Float(0), { $0 + $1.dish.price * Float($1.count) })
}
}
// Builder
class OrderBuilder {
private var order: Order?
func reset() {
order = Order()
}
func setFirstCourse(_ dish: Dish) {
set(dish, at: order?.firstCourses, withCategory: .firstCourses)
}
func setMainCourse(_ dish: Dish) {
set(dish, at: order?.mainCourses, withCategory: .mainCourses)
}
func setGarnish(_ dish: Dish) {
set(dish, at: order?.garnishes, withCategory: .garnishes)
}
func setDrink(_ dish: Dish) {
set(dish, at: order?.drinks, withCategory: .drinks)
}
func getResult() -> Order? {
return order
}
private func set(_ dish: Dish, at orderCategory: [OrderItem]?, withCategory dishCategory: DishCategory) {
guard var items = orderCategory else {
return
}
// Either increment the count of an existing item or append a new one
if let index = items.firstIndex(where: { $0.dish.name == dish.name }) {
items[index].count += 1
} else {
items.append(OrderItem(dish: dish, count: 1))
}
// Arrays are value types, so write the updated copy back to the matching category
switch dishCategory {
case .firstCourses:
order?.firstCourses = items
case .mainCourses:
order?.mainCourses = items
case .garnishes:
order?.garnishes = items
case .drinks:
order?.drinks = items
}
}
}
// Usage
let steak = Dish(name: "Steak", price: 2.30)
let chips = Dish(name: "Chips", price: 1.20)
let coffee = Dish(name: "Coffee", price: 0.80)
let builder = OrderBuilder()
builder.reset()
builder.setMainCourse(steak)
builder.setGarnish(chips)
builder.setDrink(coffee)
let order = builder.getResult()
order?.price
// Result:
// 4.30
2. Adapter
Adapter is a structural design pattern that allows objects with incompatible interfaces to work together. In other words, it transforms the interface of an object to adapt it to a different object.
An adapter wraps an object, therefore concealing it completely from another object. For example, you could wrap an object that handles meters with an adapter that converts data into feet.
You should use the Adapter design pattern:
when you want to use a third-party class but its interface doesn’t match the rest of your application’s code;
when you need to use several existing subclasses but they lack particular functionality and, on top of that, you can’t extend the superclass.
Example:
Suppose you want to implement a calendar and event management functionality in your iOS application. To do this, you should integrate the EventKit framework and adapt the Event model from the framework to the model in your application. An Adapter can wrap the model of the framework and make it compatible with the model in your application.
// Design Patterns: Adapter
import EventKit
// Models
protocol Event: AnyObject {
var title: String { get }
var startDate: String { get }
var endDate: String { get }
}
extension Event {
var description: String {
return "Name: \(title)\nEvent start: \(startDate)\nEvent end: \(endDate)"
}
}
class LocalEvent: Event {
var title: String
var startDate: String
var endDate: String
init(title: String, startDate: String, endDate: String) {
self.title = title
self.startDate = startDate
self.endDate = endDate
}
}
// Adapter
class EKEventAdapter: Event {
private var event: EKEvent
private lazy var dateFormatter: DateFormatter = {
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "MM-dd-yyyy HH:mm"
return dateFormatter
}()
var title: String {
return event.title
}
var startDate: String {
return dateFormatter.string(from: event.startDate)
}
var endDate: String {
return dateFormatter.string(from: event.endDate)
}
init(event: EKEvent) {
self.event = event
}
}
// Usage
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "MM/dd/yyyy HH:mm"
let eventStore = EKEventStore()
let event = EKEvent(eventStore: eventStore)
event.title = "Design Pattern Meetup"
event.startDate = dateFormatter.date(from: "06/29/2018 18:00")
event.endDate = dateFormatter.date(from: "06/29/2018 19:30")
let adapter = EKEventAdapter(event: event)
adapter.description
// Result:
// Name: Design Pattern Meetup
// Event start: 06-29-2018 18:00
// Event end: 06-29-2018 19:30
3. Decorator
The Decorator pattern is a structural design pattern that allows you to dynamically attach new functionality to an object by wrapping it in useful wrapper objects.
No wonder this design pattern is also called the Wrapper design pattern. This name describes more precisely the core idea behind this pattern: you place a target object inside another wrapper object that triggers the basic behavior of the target object and adds its own behavior to the result.
Both objects share the same interface, so it doesn’t matter for a user which of the objects they interact with − clean or wrapped. You can use several wrappers simultaneously and get the combined behavior of all these wrappers.
You should use the Decorator design pattern :
when you want to add responsibilities to objects dynamically and conceal those objects from the code that uses them;
when it’s impossible to extend responsibilities of an object through inheritance.
Example :
Imagine you need to implement data management in your iOS application. You could create two decorators: EncryptionDecorator for encrypting and decrypting data and EncodingDecorator for encoding and decoding.
// Design Patterns: Decorator
import Foundation
// Helpers
func encryptString(_ string: String, with encryptionKey: String) -> String {
let stringBytes = [UInt8](string.utf8)
let keyBytes = [UInt8](encryptionKey.utf8)
var encryptedBytes: [UInt8] = []
for stringByte in stringBytes.enumerated() {
encryptedBytes.append(stringByte.element ^ keyBytes[stringByte.offset % encryptionKey.count])
}
return String(bytes: encryptedBytes, encoding: .utf8)!
}
func decryptString(_ string: String, with encryptionKey: String) -> String {
let stringBytes = [UInt8](string.utf8)
let keyBytes = [UInt8](encryptionKey.utf8)
var decryptedBytes: [UInt8] = []
for stringByte in stringBytes.enumerated() {
decryptedBytes.append(stringByte.element ^ keyBytes[stringByte.offset % encryptionKey.count])
}
return String(bytes: decryptedBytes, encoding: .utf8)!
}
// Services
protocol DataSource: AnyObject {
func writeData(_ data: Any)
func readData() -> Any
}
class UserDefaultsDataSource: DataSource {
private let userDefaultsKey: String
init(userDefaultsKey: String) {
self.userDefaultsKey = userDefaultsKey
}
func writeData(_ data: Any) {
UserDefaults.standard.set(data, forKey: userDefaultsKey)
}
func readData() -> Any {
return UserDefaults.standard.value(forKey: userDefaultsKey)!
}
}
// Decorators
class DataSourceDecorator: DataSource {
let wrappee: DataSource
init(wrappee: DataSource) {
self.wrappee = wrappee
}
func writeData(_ data: Any) {
wrappee.writeData(data)
}
func readData() -> Any {
return wrappee.readData()
}
}
class EncodingDecorator: DataSourceDecorator {
private let encoding: String.Encoding
init(wrappee: DataSource, encoding: String.Encoding) {
self.encoding = encoding
super.init(wrappee: wrappee)
}
override func writeData(_ data: Any) {
let stringData = (data as! String).data(using: encoding)!
wrappee.writeData(stringData)
}
override func readData() -> Any {
let data = wrappee.readData() as! Data
return String(data: data, encoding: encoding)!
}
}
class EncryptionDecorator: DataSourceDecorator {
private let encryptionKey: String
init(wrappee: DataSource, encryptionKey: String) {
self.encryptionKey = encryptionKey
super.init(wrappee: wrappee)
}
override func writeData(_ data: Any) {
let encryptedString = encryptString(data as! String, with: encryptionKey)
wrappee.writeData(encryptedString)
}
override func readData() -> Any {
let encryptedString = wrappee.readData() as! String
return decryptString(encryptedString, with: encryptionKey)
}
}
// Usage
var source: DataSource = UserDefaultsDataSource(userDefaultsKey: "decorator")
source = EncodingDecorator(wrappee: source, encoding: .utf8)
source = EncryptionDecorator(wrappee: source, encryptionKey: "secret")
source.writeData("Design Patterns")
source.readData() as! String
// Result:
// Design Patterns
4. Facade
Facade is a structural design pattern that provides a simplified interface to a library, a framework, or any other complex set of classes.
Imagine that your code has to deal with multiple objects of a complex library or framework. You need to initialize all these objects, keep track of the right order of dependencies, and so on. As a result, the business logic of your classes gets intertwined with implementation details of other classes. Such code is difficult to read and maintain.
The Facade pattern provides a simple interface for working with complex subsystems containing lots of classes. The Facade pattern offers a simplified interface with limited functionality that you can extend by using a complex subsystem directly. This simplified interface provides only the features a client needs while concealing all others.
You should use the Facade design pattern:
when you want to provide a simple or unified interface to a complex subsystem;
when you need to decompose a subsystem into separate layers.
Example:
Lots of modern mobile applications support audio recording and playback, so let’s suppose you need to implement this functionality. You could use the Facade pattern to hide the implementation of services responsible for the file system (FileService), audio sessions (AudioSessionService), audio recording (RecorderService), and audio playback (PlayerService). The Facade provides a simplified interface for this rather complex system of classes.
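The code for this example is not included in this section, so here is a minimal sketch of how such a facade might look. The service names (FileService, AudioSessionService, RecorderService, PlayerService) come from the description above; their methods and the AudioFacade wrapper are illustrative assumptions rather than the exact implementation.
// Design Patterns: Facade (illustrative sketch)
import Foundation
// Complex subsystem (simplified placeholder implementations)
class AudioSessionService {
    func activateSession() { print("Audio session activated") }
    func deactivateSession() { print("Audio session deactivated") }
}
class FileService {
    func filePath(for name: String) -> String { return "Documents/\(name).m4a" }
}
class RecorderService {
    func startRecording(to path: String) { print("Recording to \(path)") }
    func stopRecording() { print("Recording stopped") }
}
class PlayerService {
    func play(from path: String) { print("Playing \(path)") }
    func stop() { print("Playback stopped") }
}
// Facade: one simple entry point that hides the subsystem
class AudioFacade {
    private let sessionService = AudioSessionService()
    private let fileService = FileService()
    private let recorderService = RecorderService()
    private let playerService = PlayerService()
    func startRecording(name: String) {
        sessionService.activateSession()
        recorderService.startRecording(to: fileService.filePath(for: name))
    }
    func stopRecording() {
        recorderService.stopRecording()
        sessionService.deactivateSession()
    }
    func playRecording(name: String) {
        playerService.play(from: fileService.filePath(for: name))
    }
}
// Usage
let audioFacade = AudioFacade()
audioFacade.startRecording(name: "memo")
audioFacade.stopRecording()
audioFacade.playRecording(name: "memo")
The client code talks only to AudioFacade, so changes inside the subsystem don't ripple into the business logic.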
5. Template Method
The Template Method pattern is a behavioral design pattern that defines a skeleton for an algorithm and delegates responsibility for some steps to subclasses. This pattern allows subclasses to redefine certain steps of an algorithm without changing its overall structure.
This design pattern splits an algorithm into a sequence of steps, describes these steps in separate methods, and calls them consecutively with the help of a single template method.
You should use the Template Method design pattern:
when subclasses need to extend a basic algorithm without modifying its structure;
when you have several classes responsible for quite similar actions (meaning that whenever you modify one class, you need to change the other classes).
Example:
Suppose you’re working on an iOS app that must be able to take and save pictures. Therefore, your application needs to get permissions to use the iPhone (or iPad) camera and image gallery. To do this, you can use the PermissionService base class that has a specific algorithm.
To get permission to use the camera and gallery, you can create two subclasses, CameraPermissionService and PhotoPermissionService, that redefine certain steps of the algorithm while keeping other steps the same.
// Design Patterns: Template Method
import AVFoundation
import Photos
// Services
typealias AuthorizationCompletion = (status: Bool, message: String)
class PermissionService: NSObject {
private var message: String = ""
func authorize(_ completion: @escaping (AuthorizationCompletion) -> Void) {
let status = checkStatus()
guard !status else {
complete(with: status, completion)
return
}
requestAuthorization { [weak self] status in
self?.complete(with: status, completion)
}
}
func checkStatus() -> Bool {
return false
}
func requestAuthorization(_ completion: @escaping (Bool) -> Void) {
completion(false)
}
func formMessage(with status: Bool) {
let messagePrefix = status ? "You have access to " : "You don't have access to "
let nameOfCurrentPermissionService = String(describing: type(of: self))
let nameOfBasePermissionService = String(describing: PermissionService.self)
let messageSuffix = nameOfCurrentPermissionService.components(separatedBy: nameOfBasePermissionService).first!
message = messagePrefix + messageSuffix
}
private func complete(with status: Bool, _ completion: @escaping (AuthorizationCompletion) -> Void) {
formMessage(with: status)
let result = (status: status, message: message)
completion(result)
}
}
class CameraPermissionService: PermissionService {
override func checkStatus() -> Bool {
let status = AVCaptureDevice.authorizationStatus(for: .video).rawValue
return status == AVAuthorizationStatus.authorized.rawValue
}
override func requestAuthorization(_ completion: @escaping (Bool) -> Void) {
AVCaptureDevice.requestAccess(for: .video) { status in
completion(status)
}
}
}
class PhotoPermissionService: PermissionService {
override func checkStatus() -> Bool {
let status = PHPhotoLibrary.authorizationStatus().rawValue
return status == PHAuthorizationStatus.authorized.rawValue
}
override func requestAuthorization(_ completion: @escaping (Bool) -> Void) {
PHPhotoLibrary.requestAuthorization { status in
completion(status.rawValue == PHAuthorizationStatus.authorized.rawValue)
}
}
}
// Usage
let permissionServices = [CameraPermissionService(), PhotoPermissionService()]
for permissionService in permissionServices {
permissionService.authorize { (_, message) in
print(message)
}
}
// Result:
// You have access to Camera
// You have access to Photo
That's all for this article.
Conclusion
In this article, we learned about the five design patterns most frequently used in Swift. The ability to pick a design pattern in Swift that's relevant for a particular project allows you to build fully functional and secure applications that are easy to maintain and upgrade. You should certainly have design patterns in your skillset, as they not only simplify software development but also optimize the whole process and ensure high code quality.
Thanks for reading ! I hope you enjoyed and learned about the most frequently used Design Patterns in Swift. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn about the most popular design principles, SOLID, in Swift. We will see how SOLID applies to Swift. Nowadays, a maintainable and reusable component can seem like just a dream. Maybe not: SOLID principles may be the way.
A famous quote about learning is :
” Change is the end result of all true learning.“
So Let’s begin.
Origin of the acronym SOLID
SOLID is an acronym named by Robert C. Martin (Uncle Bob). It represents 5 principles of object-oriented programming :
Single responsibility Principle
Open/Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle
If we apply these five principles:
We will have flexible code, which we can easily change and that will be both reusable and maintainable.
The software developed will be robust, stable and scalable (we can easily add new features).
Together with the use of the Design Patterns, it will allow us to create software that is highly cohesive (that is, the elements of the system are closely related) and loosely coupled (the degree of dependence between elements is low).
So, SOLID can solve the main problems of a bad architecture:
Fragility: A change may break unexpected parts—it is very difficult to detect if you don’t have a good test coverage.
Immobility: A component is difficult to reuse in another project—or in multiple places of the same project—because it has too many coupled dependencies.
Rigidity: A change requires a lot of effort because it affects several parts of the project.
Of course, as Uncle Bob pointed out in his article, these are not strict rules, but just guidelines to improve the quality of your architecture.
” Principles will not turn a bad programmer into a good programmer. Principles have to be applied with judgement. If they are applied by rote it is just as bad as if they are not applied at all. “
Principles
The Single Responsibility Principle (SRP)
According to this principle, a class should have one, and only one, reason to change. That is, a class should only have one responsibility.
Now let's describe what the Single Responsibility Principle says:
“THERE SHOULD NEVER BE MORE THAN ONE REASON FOR A CLASS TO CHANGE.“
Every time you create/change a class, you should ask yourself: How many responsibilities does this class have?
Let’s take a look into Swifty communication program.
import Foundation
class InterPlanetMessageReceiver {
func receiveMessage() {
print("Received the Message!")
}
func displayMessageOnGUI() {
print("Displaying Message on Screen!")
}
}
Now let's understand what the Single Responsibility Principle (SRP) is and how the above program doesn't obey it.
SRP says, “Just because you can implement all the features in a single device, you shouldn’t”.
In Object Oriented terms it means: There should never be more than one reason for a class to change. It doesn’t mean you can’t have multiple methods but the only condition is that they should have one single purpose.
Why? Because it adds a lot of manageability problems for you in the long run.
Here, the InterPlanetMessageReceiver class does the following:
It receives the message.
It renders it on UI.
And, two applications are using this InterPlanetMessageReceiver class:
A messaging application uses this class to receive the message
A graphical application uses this class to draw the message on the UI
Do you think it is violating the SRP?
Yes. The InterPlanetMessageReceiver class is actually performing two different things: first, it handles the messaging, and second, it displays the message on the GUI. This causes some interesting problems:
Swifty must include the GUI library in the messaging application, and while deploying the messaging application, we must ship the GUI library along with it.
A change to the InterPlanetMessageReceiver class for the graphical application may lead to a change, build, and test for the messaging application, and vice-versa.
Swifty got frustrated with the amount of change it required. He thought it would be a minute's job, but now he has already spent hours on it. So he decided to make a change to his program and fix this dependency.
This is what Swifty came up with
import Foundation
// Handles received message
class InterPlanetMessageReceiver {
func receive() {
print("Received the Message!")
}
}
// Handles the display part
class InterPlanetMessageDisplay {
func displayMessageOnGUI() {
print("Displaying Message on Screen!")
}
}
Here’s how Swifty explained this:
The InterPlanetMessageReceiver class will be used by the messaging application, and the InterPlanetMessageDisplay class will be used by the graphical application. We could even separate the classes into two files, which would allow us to change one without touching the other.
Finally, Swifty noted down why we need SRP:
Each responsibility is an agent of change.
Code becomes coupled if classes have more than one responsibility.
Open/Closed Principle
According to this principle, we must be able to extend a class's behaviour without modifying the class itself. This is achieved through abstraction.
Now let's describe what the Open/Closed Principle says:
” SOFTWARE ENTITIES (CLASSES, MODULES, FUNCTIONS, ETC.) SHOULD BE OPEN FOR EXTENSION, BUT CLOSED FOR MODIFICATION. “
If you want to create a class easy to maintain, it must have two important characteristics:
Open for extension: You should be able to extend or change the behaviour of a class with little effort.
Closed for modification: You must be able to extend a class without changing its implementation.
Let's see our Swifty example. Swifty was quite happy with these changes, and later he celebrated with a drink in Swiftzen's best pub. There, his eyes fell upon an artifact hanging on the front wall, and in it he found all the symbols he had received in the message. Quickly, he opened his diary and finished deciphering all those shapes.
The next day, when he returned, he thought: why not fix the DrawGraphic class, which draws only the circle shape, to include the rest of the shapes and display the message correctly?
// This is the DrawGraphic
class DrawGraphic {
func drawShape() {
print("Circle is drawn!")
}
}
// Updated Class code
enum Shape {
case circle
case rectangle
case square
case triangle
case pentagon
case semicircle
}
// This is the DrawGraphic
class DrawGraphic {
func drawShape(shape: Shape) {
switch shape {
case .circle:
print("Circle is drawn")
case .rectangle:
print("Rectangle is drawn")
case .square:
print("Square is drawn")
case .triangle:
print("Triangle is drawn")
case .pentagon:
print("Pentagon is drawn")
case .semicircle:
print("Semicircle is drawn")
default:
print("Shape not provided")
}
}
}
Swifty was not happy with these changes. What if a new shape shows up in the future? After all, he saw in the artifacts that there were around 123 shapes. This class would become one fat class. Also, the DrawGraphic class is used by other applications, so they would also have to adapt to this change. It was a nightmare for Swifty.
The Open/Closed Principle solves this nightmare for Swifty. At the most basic level, it means you should be able to extend a class's behavior without modifying it. It's just like being able to put on a dress without making any change to my body. Imagine what would happen if, for every dress, I had to change my body.
After hours of thinking, Swifty came up with below implementation of DrawGraphic class.
protocol Draw {
func draw()
}
class Circle: Draw {
func draw() {
print("Circle is drawn!")
}
}
class Rectangle: Draw {
func draw() {
print("Rectangle is drawn!")
}
}
class DrawGraphic {
func drawShape(shape: Draw) {
shape.draw()
}
}
let circle = Circle()
let rectangle = Rectangle()
let drawGraphic = DrawGraphic()
drawGraphic.drawShape(shape: circle) // Circle is drawn!
drawGraphic.drawShape(shape: rectangle) // Rectangle is drawn!
Since the DrawGraphic is responsible for drawing all the shapes, and because the shape design is unique to each individual shape, it seems only logical to move the drawing for each shape into its respective class.
That means the DrawGraphic still has to know about all the shapes, right? Because how does it know that the object it's iterating over has a draw method? Sure, this can be solved by having each of the shape classes conform to a protocol: the Draw protocol (this could be an abstract class too).
The Circle and Rectangle classes conform to the Draw protocol, and the concrete DrawGraphic class depends only on that protocol. So, if for any reason the DrawGraphic implementation changes, the Circle and Rectangle classes are unlikely to require any change, and vice versa.
Liskov Substitution Principle
This principle, introduced by Barbara Liskov in 1987, states that in a program any class should be able to be replaced by one of its subclasses without affecting its functioning.
Now let's describe what the Liskov Substitution Principle says:
” FUNCTIONS THAT USE POINTERS OR REFERENCES TO BASE CLASSES MUST BE ABLE TO USE OBJECTS OF DERIVED CLASSES WITHOUT KNOWING IT.“
Inheritance may be dangerous and you should use composition over inheritance to avoid a messy codebase. Even more if you use inheritance in an improper way.
This principle can help you to use inheritance without messing it up.
Let's see our Swifty example. Swifty was implementing the Sender class to know whether the sender is from a planet or not.
The class design looked something like this:
class Planet {
func orbitAroundSun() {
}
// Base behaviour so Sender can call description() on any Planet
func description() {
print("It is a Planet!")
}
}
class Earth: Planet {
override func description() {
print("It is Earth!")
}
}
class Pluto: Planet {
override func description() {
print("It is Pluto!")
}
}
class Sender {
func senderOrigin(planet: Planet) {
planet.description()
}
}
In this class design, Pluto should not inherit from the Planet class, because Pluto is a dwarf planet. There should be a separate class for planets that have not cleared the neighborhood around their orbit, and Pluto should inherit from that.
So the principle says that Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.
Swifty whispered: it is polymorphism. Yes, it is. Inheritance is usually described as an "is a" relationship. If a DwarfPlanet "is a" Planet, then the DwarfPlanet class should inherit from the Planet class. Such "is a" relationships are very important in class designs, but it's easy to get carried away and end up with a wrong design and bad inheritance.
The “Liskov’s Substitution Principle” is just a way of ensuring that inheritance is used correctly.
In the above case, both Earth and Pluto can orbit around the Sun but Pluto is not a planet. It has not cleared the neighborhood around its orbit. Swifty understood this and changed the program.
class Planet {
func orbitAroundSun() {
print("This planet orbits around the Sun!")
}
// Base behaviour so Sender can call description() on any Planet
func description() {
print("Planet")
}
}
class Earth: Planet {
override func description() {
print("Earth")
}
}
class DwarfPlanet: Planet {
func notClearedNeighbourhoodOrbit() {
}
}
class Pluto: DwarfPlanet {
override func description() {
print("Pluto")
}
}
class Sender {
func senderOrigin(from: Planet) {
from.description()
}
}
let pluto = Pluto()
let earth = Earth()
let sender = Sender()
sender.senderOrigin(from: pluto) // Pluto
sender.senderOrigin(from: earth) // Earth
Here, Pluto inherits from DwarfPlanet, which adds the notClearedNeighbourhoodOrbit method that distinguishes a dwarf planet from a regular planet.
If LSP is not maintained, class hierarchies would be a mess, and if a subclass instance was passed as parameter to methods, strange behavior might occur.
If LSP is not maintained, unit tests for the base classes would never succeed for the subclass.
Swifty can design objects and apply LSP as a verification tool to test the hierarchy whether inheritance is properly done.
Interface Segregation Principle
The Interface Segregation Principle indicates that it is better to have different interfaces (protocols) that are specific to each client than to have one general interface. In addition, it indicates that a client should not have to implement methods that it does not use.
Now let's describe what the Interface Segregation Principle says:
” CLIENTS SHOULD NOT BE FORCED TO DEPEND UPON INTERFACES THAT THEY DO NOT USE.“
This principle introduces one of the problems of object-oriented programming: the fat interface.
An interface is called "fat" when it has too many members/methods, which are not cohesive and contain more information than we really need. This problem can affect both classes and protocols.
Let's continue our Swifty example. Swifty was quite astonished by the improvement in his program. All the changes were making more sense. Now, it was time to share this code with different planets. Fifty percent of Swiftzen's GDP depended on selling software, and many planets had requested and signed an MOU for the interplanetary communication system.
Swifty was ready to sell the program, but he was not satisfied with the current client interface. Let's look into it.
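The original snippet is not shown here. Based on the description that follows (a single interPlanetCommunication interface with five methods), the "fat" protocol might have looked roughly like this sketch; the method names are assumptions for illustration only.
// A "fat" protocol: every client must implement all five methods,
// even the ones it doesn't need (method names are assumed)
protocol InterPlanetCommunication {
    func receiveMessage()
    func displayMessageOnGUI()
    func sendMessage()
    func encryptMessage()
    func decryptMessage()
}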
Now, anyone who wants to use interPlanetCommunication has to implement all five methods, even though they might not be required.
So the principle says that many client-specific interfaces are better than one general-purpose interface. The principle ensures that interfaces are developed so that each of them has its own responsibility and is thus specific, easily understandable, and reusable.
Swifty quickly made changes to his program interface:
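The refactored interface is also not shown in this section; below is a sketch of how it might be segregated into smaller, client-specific protocols (again with assumed method names).
// Client-specific protocols: each client adopts only what it needs
protocol MessageReceiving {
    func receiveMessage()
}
protocol MessageDisplaying {
    func displayMessageOnGUI()
}
protocol MessageSending {
    func sendMessage()
}
protocol MessageSecurity {
    func encryptMessage()
    func decryptMessage()
}
// A messaging client now conforms only to the protocols it actually uses
class MessagingClient: MessageReceiving, MessageSending {
    func receiveMessage() { print("Received the Message!") }
    func sendMessage() { print("Sent the Message!") }
}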
Dependency Inversion Principle
Now let's describe what the Dependency Inversion Principle says:
"HIGH LEVEL MODULES SHOULD NOT DEPEND UPON LOW LEVEL MODULES. BOTH SHOULD DEPEND UPON ABSTRACTIONS."
“ABSTRACTIONS SHOULD NOT DEPEND UPON DETAILS. DETAILS SHOULD DEPEND UPON ABSTRACTIONS.”
This principle tries to reduce the dependencies between modules, and thus achieve a lower coupling between classes.
This principle is the right one to follow if you believe in reusable components.
DIP is very similar to Open-Closed Principle: the approach to use, to have a clean architecture, is decoupling the dependencies. You can achieve it thanks to abstract layers.
Let’s continue our swifty example. Before finally shipping the program to all the clients, Swifty was trying to fix the password reminder class which looks like this.
class PasswordReminder {
func connectToDatabase(db: SwiftZenDB) {
print("Database Connected to SwiftzenDB")
}
func sendReminder() {
print("Send Reminder")
}
}
The PasswordReminder class is dependent on a lower-level module, i.e. the database connection. Now, suppose you want to change the database connection from Swiftzen to Objective-Czen; you would have to edit the PasswordReminder class.
Finally, the last principle states that entities must depend on abstractions, not on concretions.
The DBConnection protocol has a connection method, and the SwiftzenDB class implements this protocol. Also, instead of referencing the SwiftzenDB class directly in PasswordReminder, Swifty references the protocol. So no matter which database your application uses, the PasswordReminder class can connect to it without any problems, and OCP is not violated.
The point is that, rather than depending directly on SwiftzenDB, PasswordReminder depends on an abstraction of the database, so any database that conforms to the abstraction can be connected to PasswordReminder and it will work.
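The fixed version is described above but its code is not shown here; below is a minimal sketch, assuming only the DBConnection protocol and SwiftzenDB class named in the text.
// Abstraction that both high-level and low-level modules depend on
protocol DBConnection {
    func connect()
}
// Low-level module: a concrete database
class SwiftzenDB: DBConnection {
    func connect() {
        print("Database Connected to SwiftzenDB")
    }
}
// High-level module depends on the abstraction, not on a concrete database
class PasswordReminder {
    private let dbConnection: DBConnection
    init(dbConnection: DBConnection) {
        self.dbConnection = dbConnection
    }
    func sendReminder() {
        dbConnection.connect()
        print("Send Reminder")
    }
}
// Usage: swapping databases no longer requires editing PasswordReminder
let reminder = PasswordReminder(dbConnection: SwiftzenDB())
reminder.sendReminder()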
That's all for this article.
Conclusion
In this article, we understood the SOLID principles in Swift. We learned how SOLID applies to Swift. If we follow the SOLID principles judiciously, we can increase the quality of our code. Moreover, our components can become more maintainable and reusable.
The mastering of these principles is not the last step to become a perfect developer, actually, it’s just the beginning. We will have to deal with different problems in our projects, understand the best approach and, finally, check if we are breaking some principles.
We have three enemies to defeat: fragility, immobility, and rigidity. SOLID principles are our weapons. We have tried to explain the SOLID concepts in Swift in an easy way with examples.
Thanks for reading ! I hope you enjoyed and learned about SOLID Principles in Swift. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will talk about the ins and outs of Jest to help you get started with testing. We will learn more about the vocabulary associated with Jest testing, like mocks and spies. Also, we'll cover some of the basics of Jest testing, like using describe blocks and the keywords it and expect. Finally, we'll take a look at snapshot testing and why it's particularly useful for front-end testing.
A famous quote about learning is :
" The more I live, the more I learn. The more I learn, the more I realize, the less I know."
So Let’s start.
What Is Jest?
Jest was created by Facebook specifically for testing React applications. It’s one of the most popular ways of testing React components. Since its introduction, the tool has gained a lot of popularity. This popularity has led to the use of Jest for testing both JavaScript front-end and back-end applications. Many large companies—including Twitter, Instagram, Pinterest, and Airbnb—use Jest for React testing.
Jest itself is actually not a library but a framework. There’s even a CLI tool that you can use from the command line. To give an example, the CLI tool allows you to run only specific tests that match a pattern. Besides that, it hosts much more functionality, which you can find in the CLI documentation.
Jest offers a test runner, assertion library, CLI tool, and great support for different mocking techniques. All of this makes it a framework and not just a library.
Jest Characteristics
From the JestJS.io website, we can find four main characteristics of Jest:
Zero config: “Jest aims to work out of the box, config free, on most JavaScript projects.” This means you can simply install Jest as a dependency for your project, and with no or minimal adjustments, you can start writing your first test.
Isolated: Isolation is a very important property when running tests. It ensures that different tests don’t influence each other’s results. For Jest, tests are executed in parallel, each running in their own process. This means they can’t interfere with other tests, and Jest acts as the orchestrator that collects the results from all the test processes.
Snapshots: Snapshots are a key feature for front-end testing because they allow you to verify the integrity of large objects. This means you don’t have to write large tests full of assertions to check if every property is present on an object and has the right type. You can simply create a snapshot and Jest will do the magic. Later, we’ll discuss in detail how snapshot testing works.
Rich API: Jest is known for having a rich API offering a lot of specific assertion types for very specific needs. Besides that, its great documentation should help you get started quickly.
Jest Vocabulary
Mock
From the Jest documentation, we can find the following description for a Jest mock:
“Mock functions make it easy to test the links between code by erasing the actual implementation of a function, capturing calls to the function (and the parameters passed in those calls).”
In addition, we can use a mock to return whatever we want it to return. This is very useful to test all the paths in our logic because we can control if a function returns a correct value, wrong value, or even throws an error.
In short, a mock can be created by assigning the following snippet of code to a function or dependency:
jest.fn()
Here’s an example of a simple mock, where we just check whether a mock has been called. We mock mockFn and call it. Thereafter, we check if the mock has been called:
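The snippet itself is not included above; a minimal sketch of what it could look like:
test('mock function has been called', () => {
  // Create the mock and call it
  const mockFn = jest.fn();
  mockFn();

  // Thereafter, check if the mock has been called
  expect(mockFn).toHaveBeenCalled();
});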
Spy
A spy has slightly different behavior but is still comparable with a mock. Again, from the official docs, we read,
“Creates a mock function similar to jest.fn() but also tracks calls to object[methodName]. Returns a Jest mock function.”
What this means is that the function acts as it normally would—however, all calls are being tracked. This allows you to verify if a function has been called the right number of times and held the right input parameters.
Below, you’ll find an example where we want to check if the play method of a video returns the correct result but also gets called with the right parameters. We spy on the play method of the video object. Next, we call the play method and check if the spy has been called and if the returned result is correct. Pretty straightforward! In the end, we must call the mockRestore method to reset a mock to its original implementation.
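The example code is also missing here; a sketch along the lines of the spy example in the official Jest docs:
const video = {
  play() {
    return true;
  },
};

test('plays video', () => {
  // Spy on the play method of the video object
  const spy = jest.spyOn(video, 'play');
  const isPlaying = video.play();

  // Verify the spy was called and the returned result is correct
  expect(spy).toHaveBeenCalled();
  expect(isPlaying).toBe(true);

  // Reset the method to its original implementation
  spy.mockRestore();
});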
Let’s take a look at some basics on writing tests with Jest.
Describe Blocks
A describe block is used for organizing test cases in logical groups of tests. For example, we want to group all the tests for a specific class. We can further nest new describe blocks in an existing describe block. To continue with the example, you can add a describe block that encapsulates all the tests for a specific function of this class.
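As a small illustrative sketch (the Calculator group and the add function are made up for this example):
describe('Calculator', () => {
  // Nested describe block for one specific function of the class
  describe('add', () => {
    it('adds two positive numbers', () => {
      expect(1 + 2).toBe(3);
    });
  });
});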
"It" or "Test" Tests
We use the test keyword to start a new test case definition. The it keyword is an alias for the test keyword. Personally, I like to use it, which allows for a more natural language flow when writing tests. For example:
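The example itself is not shown above; a minimal sketch with a made-up multiply helper:
const multiply = (a, b) => a * b;

// "test" and "it" are interchangeable
test('multiplies 2 by 3', () => {
  expect(multiply(2, 3)).toBe(6);
});

it('should multiply 4 by 5', () => {
  expect(multiply(4, 5)).toBe(20);
});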
Matchers
Next, let's look at the matchers Jest exposes. A matcher is used for creating assertions in combination with the expect keyword. We want to compare the output of our test with a value we expect the function to return.
Again, let’s look at a simple example where we want to check if an instance of a class is the correct class we expect. We place the test value in the expect keyword and call the exposed matcher function toBeInstanceOf(<class>) to compare the values. The test results in the following code:
it('should be an instance of Car', () => {
// Assuming a Truck class that extends Car
expect(new Truck()).toBeInstanceOf(Car);
});
The complete list of exposed matchers can be found in the Jest API reference.
Snapshot Testing for React Front Ends
Lastly, the Jest documentation suggests using snapshot tests to detect UI changes. As I mentioned earlier, snapshot testing can also be applied to checking larger objects, or even the JSON response of API endpoints.
Let’s take a look at an example for React where we simply want to create a snapshot for a link object. The snapshot itself will be stored with the tests and should be committed alongside code changes.
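The snippet is not shown here; a sketch close to the well-known example from the Jest docs, assuming a Link React component exists in your project (the import path is an assumption):
import renderer from 'react-test-renderer';
import Link from '../Link'; // assumed path to your Link component

it('renders the link correctly', () => {
  const tree = renderer
    .create(<Link page="https://example.com">Example</Link>)
    .toJSON();

  // On the first run Jest writes the snapshot file; later runs compare against it
  expect(tree).toMatchSnapshot();
});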
If the link object changes, this test will fail in the future. If the changes to the UI elements are correct, you should update the snapshots by storing the results in the snapshot file. You can automatically update snapshots using the Jest CLI tool by adding a “-u” flag when executing the tests.
Conclusion
In this article, we learned about the Jest framework concepts. We learned more about the vocabulary associated with Jest testing, like mocks and spies. We also covered some of the basics of Jest testing, like using describe blocks and the keywords it and expect, as well as snapshot testing and why it's particularly useful for front-end testing.
Thanks for reading ! I hope you enjoyed and learned about the Jest unit testing framework concepts . Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn about the most fundamental variable declaration concepts in JavaScript: let, var, and const. Let, var, and const are the various ways that JavaScript provides for declaring variables. Var is the old way of declaring variables, whereas let and const came into the picture with the ES6 version. Before starting the discussion about JavaScript let vs var vs const, let's understand what ES is.
ES stands for ECMAScript, a scripting language specification standardized by Ecma International. It standardizes the various implementations of JavaScript.
In this article, we will discuss differences of following ways of declaring a variable concerning their scope, use, and hoisting:
Var keyword: What, How, and Where?
Let keyword: What, How, and Where?
Const keyword: What, How, and Where?
A famous quote about learning is :
” Education is not the filling of a pail, but the lighting of a fire. “
Var keyword: What, How, and Where?
What is a “var” keyword ?
The "var" keyword is one of the ways we can declare a variable in JavaScript. Before the advent of ES6, var was the only way to declare variables. In other words, out of let vs var vs const, var was the sole way to declare variables. Its syntax looks like below:
Syntax:
var variable = value;
Scope of var:
The scope specifies where we can access or use the variables. When we declare a variable outside a function, its scope is global. In other words, it means that the variables whose declaration happens with “var” outside a function (even within a block) are accessible in the whole window. Whereas, when the declaration of a variable occurs inside a function, it is available and accessible only within that function.
It is illustrated with the help of following code snippet:
<html>
<body> Demonstrating var scopes in javascript:</br>
<script type="text/javascript">
var globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
globalVariable = 10;
var localBlockVariable = 15;
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
}
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
document.write("The value of block local variable outside block is: ", localBlockVariable, "</br>");
function updateVariables() {
globalVariable = 20;
localBlockVariable = 25;
var localFunctionVariable = 30;
document.write("The value of global variable inside function is: ", globalVariable, "</br>");
document.write("The value of block local variable inside function is: ", localBlockVariable, "</br>");
}
updateVariables();
document.write("The value of global variable outside function is: ", globalVariable, "</br>");
document.write("The value of block local variable outside function is: ", localBlockVariable, "</br>");
// This following statement will give error, as the local function variable can't be accessed outside
// document.write("The value of function local variable outside function is: ", localFunctionVariable, "</br>");
</script>
</body>
</html>
As is evident in the above code snippet, the variables declared in the global and block scope are accessible in the whole window. In contrast, variables declared inside a function can be accessed within that function only.
Re-declaration of “var” variables:
The variables declared using var can be re-declared within the same scope also, and it will not raise any error.
Let’s understand it with the help of following code snippet:
<html>
<body> Demonstrating var scopes in javascript:</br>
<script type="text/javascript">
var globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
var globalVariable = 10; // Re-declare in block
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
}
var globalVariable = 15; // Re-declare in same scope
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
</script>
</body>
</html>
As is clear from the above code snippet, the same variable “globalVariable” has been declared multiple times without any error.
Hoisting of var:
Hoisting is a JavaScript mechanism where variables and function declarations move to the top of their scope before code execution.
For Example, Let see below code snippet :
document.write(variable1);
var variable1 = "Hello World"
Here, JavaScript will interpret it as:
var variable1;
document.write(variable1); // variable1 will be undefined
variable1 = "Hello World"
So var variables hoist to the top of their scope and initialize with a value of undefined. If we access a var variable before its declaration, its value is printed as "undefined".
Let keyword: What, How, and Where?
What is the “let” keyword?
In the ES2015 release, one more keyword for the declaration of variables was introduced, known as "let". Its syntax looks like below:
Syntax:
let variable = value;
Scope of let:
let is block-scoped. A block is a chunk of code bounded by {}. A variable declared in a block with "let" is only available for use within that block.
Let’s try to understand the same with the help of following code snippet:
<html>
<body> Demonstrating let scopes in javascript:</br>
<script type="text/javascript">
let globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
globalVariable = 10;
let localBlockVariable = 15;
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
}
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
// This following statement will give error, as the local block variable can't be accessed outside
// document.write("The value of block local variable outside block is: ", localBlockVariable, "</br>");
function updateVariables() {
globalVariable = 20;
// localBlockVariable = 25; // This would create a new global variable, since the block-scoped variable is not visible here
let localFunctionVariable = 30;
document.write("The value of global variable inside function is: ", globalVariable, "</br>");
// This following statement will give error, as the local block variable can't be accessed outside
// document.write("The value of block local variable inside function is: ", localBlockVariable, "</br>");
}
updateVariables();
document.write("The value of global variable outside function is: ", globalVariable, "</br>");
// This following statement will give error, as the local block variable can't be accessed outside
// document.write("The value of block local variable outside function is: ", localBlockVariable, "</br>");
// This following statement will give error, as the local function variable can't be accessed outside
// document.write("The value of function local variable outside function is: ", localFunctionVariable, "</br>");
</script>
</body>
</html>
The above code snippet clearly shows that the variables declared using "let" are block-scoped and can't be accessed outside the block in which they are declared.
Re-declaration of “let” variables:
A variable declared using let can't be re-declared within the same scope.
It can be demonstrated easily with the help of following code snippet:
<html>
<body> Demonstrating let re-declare in javascript:</br>
<script type="text/javascript">
let globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
// The following statement would not raise an error; it would declare a new block-scoped variable that shadows the outer one
// let globalVariable = 10;
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
}
// The following statement will raise an error
// let globalVariable = 15; // Re-declare in same scope
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
</script>
</body>
</html>
The above code snippet clearly shows that variables declared using let can't be re-declared in the same scope.
Hoisting of let:
Just like var, let declarations hoist to the top. But unlike var, which initializes as undefined, the let keyword does not initialize. So if you try to use a let variable before its declaration, you'll get a "ReferenceError".
Consider the below code snippet to validate the same:
<html>
<body> Demonstrating let hoisting in javascript:</br>
<script type="text/javascript">
document.write("The value of let variable is: ", letVariable, "</br>");
let letVariable = 5; // let variable declared later on
</script>
</body>
</html>
As is evident from the above code snippet, JavaScript raises “Uncaught ReferenceError“ if we access the variable as let before its declaration.
Const keyword: What, How, and Where?
What is the “const” keyword?
Variables declared with the "const" keyword maintain constant bindings and can't be reassigned within their scope. Its syntax looks like below:
Syntax:
const variable = value1;
Scope of const:
Similar to let, the scope of const variables is also block-level.
The following code snippet will help us understand it better:
<html>
<body> Demonstrating const scopes in javascript:</br>
<script type="text/javascript">
const globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
// This following statement will give error, as const can't be assigned a new value
// globalVariable = 10;
const localBlockVariable = 15;
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
document.write("The value of block local variable inside block is: ", localBlockVariable, "</br>");
}
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
// This following statement will give error, as the local block variable can't be accessed outside
// document.write("The value of block local variable outside block is: ", localBlockVariable, "</br>");
function updateVariables() {
// This following statement will give error, as const can't be assigned a new value
//globalVariable = 20;
const localBlockVariable = 25; // This will be considered a new variable
const localFunctionVariable = 30;
document.write("The value of global variable inside function is: ", globalVariable, "</br>");
document.write("The value of block local variable inside function is: ", localBlockVariable, "</br>");
}
updateVariables();
document.write("The value of global variable outside function is: ", globalVariable, "</br>");
// This following statement will give error, as the local block variable can't be accessed outside
// document.write("The value of block local variable outside function is: ", localBlockVariable, "</br>");
// This following statement will give error, as the local function variable can't be accessed outside
// document.write("The value of function local variable outside function is: ", localFunctionVariable, "</br>");
</script>
</body>
</html>
The above code snippet depicts that const variables are block-scoped and can’t update with a new value.
Re-declaration of const variables:
Similar to let variables, a variable declared using const can't be re-declared in the same scope.
We can easily demonstrate it with the help of following code snippet:
<html>
<body> Demonstrating const re-declare in javascript:</br>
<script type="text/javascript">
const globalVariable = 5;
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
if (globalVariable == 5)
{
// The following statement would not raise an error; it would declare a new block-scoped constant that shadows the outer one
// const globalVariable = 10;
document.write("The value of global variable inside block is: ", globalVariable, "</br>");
}
// The following statement will raise an error
// const globalVariable = 15; // Re-declare in same scope
document.write("The value of global variable outside block is: ", globalVariable, "</br>");
</script>
</body>
</html>
The above code snippet makes it clear that const variables can't be re-declared in the same scope.
Hoisting of const:
Just like “let,” “const” declarations hoist to the top but don’t initialize.
Consider the below code snippet to validate the same:
<html>
<body> Demonstrating const hoisting in javascript:</br>
<script type="text/javascript">
document.write("The value of const variable is: ", varVariable, "</br>");
const constVariable = 5; // const variable declared later on
</script>
</body>
</html>
As is evident from the above code snippet, JavaScript raises “Uncaught ReferenceError” if we access a variable that we declare as const before its declaration.
Conclusion
In this article, we understood the JavaScript variable concepts let, var, and const. We conclude that:
If you declare a variable using the "var" keyword, it will be in the global scope (accessible to the whole program) if declared outside all functions. It will have a local scope (accessible within the function only) if defined inside a function.
If you declare a variable using the "let" keyword, it will be block-scoped, i.e., any variable declared using let will be accessible within the surrounding curly brackets ({ }) only.
If you declare a variable using the “const” keyword, you will not be able to change its value later on. As per scope, it will be the same as variables declared using the “let” keyword.
Thanks for reading ! I hope you enjoyed and learned about JavaScript different types of Variable Concepts. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.
Hello Readers, CoolMonkTechie heartily welcomes you in this article.
In this article, we will learn about the JavaScript timeout concept. There can be multiple scenarios where a programmer decides to execute a function at a later time instead of running it instantaneously. This kind of behavior is called "scheduling a call" or "scheduling a timeout".
For better understanding about the timeout concepts, We will discuss the below following the list of topics which we are going to cover in this article:-
What is Timeout in JavaScript?
How to Schedule a Timeout in JavaScript?
How to clear the Scheduled Timeout in JavaScript?
A famous quote about learning is :
” Tell me and I forget, teach me and I may remember, involve me and I learn.”
So, Let’s begin.
What is Timeout in JavaScript?
We are sure that every programmer would have faced the following scenarios during their development career:
A specific function needs to wait for some time before performing a particular task or triggering an event.
A specific function or task needs to repeat at a predefined interval.
Now, to handle all such kind of scenarios, JavaScript provides the “Timeout” functionality. Moreover, this functionality essentially allows the JavaScript developer to specify in a script that a particular function or piece of JavaScript code should execute after a specified interval of time has elapsed or should be repeated at a set interval time only.
How to schedule a Timeout in JavaScript?
JavaScript provides two methods to achieve the scheduling functionality. These are:
setTimeout()
setInterval()
Let’s discuss both of these functions in detail:
1. setTimeout()
This function allows us to run a function once after the specified interval of time. Additionally, its syntax looks like below:
Syntax:
let timerId = setTimeout(function, timeInMilliseconds, param1, param2, ...);
Where,
function: This parameter specifies the function that needs to execute. Additionally, it is a mandatory parameter.
timeInMilliseconds: The parameter specifies the “number of milliseconds” to wait before executing the code. If omitted, we use the value 0. Additionally, it is an optional parameter.
param1, param2, … : These are the additional parameters to pass to the function. Moreover, these are the optional parameters.
Let’s understand the usage of setTimeout() function with few examples for some sample scenarios:
Example 1: Wait for Alert
Now, consider a straightforward situation, that the user needs to display an alert after 2 seconds. One can achieve this with the help of setTimeout() as shown in the below code snippet:
<html>
<body>
Demonstrating setTimeout in javascript
</br>
<script type = "text/javascript">
setTimeout(()=>{
alert("WelCome!!");
},2000)
</script>
</body>
</html>
It is evident from the above code that the browser will display the alert with “WelCome!!” as text after 2 seconds of the page load.
2. setInterval()
This method allows us to run a function repeatedly, starting after the interval of time and then repeating continuously at that interval. Additionally, its syntax looks like below:
Syntax:
let timerId = setInterval(function, timeInMilliseconds, param1, param2, ...)
Where,
function: This parameter specifies the function that needs to execute. Additionally, it is a mandatory parameter.
timeInMilliseconds: The parameter specifies the intervals (in milliseconds) on how often to execute the code. If the value is less than 10, we use the value 10. Also, it is an optional parameter.
param1, param2, … : These are the additional parameters to pass to the function. Moreover, these are the optional parameters.
Let’s understand the usage of setInterval() function with the following example:
Example: Display a digital clock
Let's look at the scenario of displaying a digital clock using the setInterval() method, as shown in the below code snippet:
<html>
<body>
Demonstrating setInterval for displaying a clock in javascript:
<p id="txt"></p>
<script>
const myLet = setInterval(myTimer, 1000);
function myTimer() {
const date = new Date();
const time = date.toLocaleTimeString();
document.getElementById("txt").innerHTML = time;
}
</script>
</body>
</html>
As it is evident from the above code snippet, it displays the digital time that gets updated every second using the setInterval() method.
How to clear the Scheduled Timeout in JavaScript?
To cancel the scheduled tasks, JavaScript provides two methods:
clearTimeout()
clearInterval()
1. clearTimeout()
This method clears a timer set with the setTimeout() method and prevents the function set with setTimeout() from executing. Additionally, its syntax looks like below:
Syntax:
clearTimeout(timerId_of_setTimeout)
Where,
timerId_of_setTimeout: This parameter is the ID value of the timer returned by the setTimeout() method. Moreover, it is a mandatory field.
Let’s understand the functionality of clearTimeout() in detail with the help of following code snippet:
<html>
<body>
<button onclick="startCount()">Start count!</button>
<input type="text" id="txt">
<button onclick="stopCount()">Stop count!</button>
<p>
Click on the "Start count!" button above to start the timer
</br> Click on the "Stop count!" button to stop the counting.
</p>
<script>
let count = 0;
let time;
let timer_flag = 0;
function timedCount() {
document.getElementById("txt").value = count;
count = count + 1;
time = setTimeout(timedCount, 1000);
}
function startCount() {
if (!timer_flag) {
timer_flag = 1;
timedCount();
}
}
function stopCount() {
clearTimeout(time);
timer_flag = 0;
}
</script>
</body>
</html>
The above code snippet shows how, using the clearTimeout() method, the user can prevent the function set with the setTimeout() method from executing.
2. clearInterval()
This method clears the timer set with the setInterval() method or can cancel the schedule created using the setInterval() method. Additionally, its syntax looks like below:
Syntax:
clearInterval(timerId);
Where,
timerId: The parameter signifies the timer returned by the setInterval() method. It is a required field.
Let’s understand the functionality of clearInterval() in detail with the help of following code snippet, where user can invoke the clearInterval() method to stop the digital clock initiated using setInterval() method:
<html>
<body>
Demonstrating clearInterval for displaying/Stopping a clock in javascript:
<p id="txt"></p>
<button onclick="stopWatch()">Stop Timer</button>
<script>
const myInterval = setInterval(myTimer, 1000);
function myTimer() {
const date = new Date();
const time = date.toLocaleTimeString();
document.getElementById("txt").innerHTML = time;
}
function stopWatch() {
clearInterval(myInterval);
}
</script>
</body>
</html>
In the above example, as soon as one clicks the Stop Timer button, the setInterval function terminates.
That's all for this article.
Conclusion
In this article, We understood about JavaScript Timeout Concepts. We conclude that :
If we want to schedule the execution of a task/function, the same can be achieved in JavaScript using the setTimeout() and setInterval() methods.
If we want to cancel the already scheduled tasks, the same can be achieved in JavaScript using the clearTimeout() and clearInterval() method.
Thanks for reading ! I hope you enjoyed and learned about JavaScript Timeout Concept. Reading is one thing, but the only way to master it is to do it yourself.
Please follow and subscribe to this blog and support us in any way possible. Also, like and share the article with others to spread valuable knowledge.
If you have any comments, questions, or think I missed something, feel free to leave them below in the comment box.