- Register your business: Depending on your state, the process may vary.
- Create a business plan: A well-structured plan for your virtual business helps maximize resources and attract investors.
- Establish communication expectations: Communication is crucial for virtual business owners.
- Create a website or online store: Give customers a platform for shopping with you online.
- Hire remote employees: Build a distributed team and offer diverse learning experiences for all team members.
- Set learner expectations: When running virtual training sessions, set expectations for learners from the start.
- Virtualize servers, storage, networks, desktops, and applications: Virtualization improves efficiency, security, scalability, and continuity.
- Consolidate multiple virtual environments onto a single physical server: This can significantly reduce the number of servers.
- Register your business on VMware.com: Create an account and register under your business name.
- Consider virtualization technology: 64-bit Intel or AMD hardware is suitable for small and medium-sized businesses; AMD is often preferred for lower costs and energy savings. A quick way to check whether your hardware supports virtualization is sketched below.
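As a minimal, hedged illustration of that hardware check (Linux only; on Windows, Task Manager's CPU tab or `systeminfo` reports the same capability): Intel VT-x shows up as the `vmx` flag and AMD-V as the `svm` flag in `/proc/cpuinfo`.

```python
# Minimal sketch (Linux only): check /proc/cpuinfo for the CPU flags that
# indicate hardware virtualization support. "vmx" = Intel VT-x, "svm" = AMD-V.
def has_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = f.read()
    return ("vmx" in flags) or ("svm" in flags)

if __name__ == "__main__":
    if has_virtualization_support():
        print("CPU reports VT-x/AMD-V; enable it in BIOS/UEFI if it is not already on.")
    else:
        print("No virtualization flags found; check firmware settings or hardware specs.")
```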
Innovative virtualization techniques can enhance your small business’s IT infrastructure, security, and remote work capabilities.
📹 Virtual Machines vs Containers
This is an animated video explaining the difference between virtual machines and containers.
Is virtualization good for small business?
Virtualization is a valuable tool for small businesses, offering data protection and disaster recovery by consolidating physical servers into virtual environments. This isolates critical data and applications from potential hardware failures, streamlines operations, and reduces costs. By virtualizing servers, storage, networks, desktops, and applications, small businesses and their IT teams can improve efficiency, security, scalability, and continuity while lowering expenses.
Implementing virtualization involves creating virtual versions of computing resources such as servers, storage devices, networks, and operating systems, allowing multiple virtual systems to run on a single physical system. This approach can benefit both new and existing businesses looking to drive improvements through virtualization.
Is virtualization safe for PC?
Virtualization offers several benefits over physical computers, such as isolation from the underlying operating system and hardware, integration of high-security tools, and portability. However, it also has vulnerabilities that can invite attackers if not properly secured. Virtual machines (VMs) are like miniature computers, each running its own OS, that run simultaneously on a single piece of hardware, making it convenient to spin them up as workloads require. It is crucial to secure the applications and operations of each VM to prevent potential threats.
Does virtualization slow PC?
Virtualization in gaming can offer benefits, but because resources are shared it can also negatively impact game performance; overcommitting virtual CPUs can introduce extra latency and delays in games. Despite these drawbacks, CPU virtualization is beneficial for gaming because it allows users to play games on computers with lower performance specifications, making gaming accessible to a wider audience.
Virtualization also lets users draw on resources they do not own locally, which further widens that audience. Creating virtual versions of servers, desktops, or operating systems can help grow the gaming industry and increase the number of users playing various games.
How can virtualization save an organization money?
Virtualized environments are a cost-effective way to consolidate applications: they reduce the need for physical servers and save on server costs, while also reducing downtime and improving resiliency in disaster recovery situations. Because virtual machines are easy to provision and deploy, affected machines can be recovered quickly, cutting the time needed to replace or fix physical servers and improving business continuity. Overall, virtualized environments offer a more efficient and cost-effective solution for organizations.
What is virtualization in cloud computing?
Virtualization is a process that creates a simulated computing environment, allowing organizations to partition a single physical computer or server into multiple virtual machines. These virtual machines can operate independently and run different operating systems or applications while sharing the resources of a single host machine. This improves scalability and workload flexibility while reducing the number of servers needed, along with energy consumption and infrastructure costs.
Virtualization can be categorized into four main types: desktop virtualization, network virtualization, software virtualization, and storage virtualization. Desktop virtualization allows a centralized server to manage individualized desktops, network virtualization splits network bandwidth into independent channels, software virtualization separates applications from hardware and operating systems, and storage virtualization combines multiple network storage resources into a single device for multiple users.
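To make the partitioning idea concrete, here is a minimal, hedged sketch using the libvirt Python bindings (assuming a Linux host running KVM/QEMU with the `libvirt-python` package installed and the standard local connection URI). It simply lists the virtual machines sharing one physical host and the resources allocated to each:

```python
# Minimal sketch: list the VMs sharing a single KVM/QEMU host via libvirt.
# Assumes a Linux host with libvirt running and `pip install libvirt-python`.
import libvirt

conn = libvirt.open("qemu:///system")  # local hypervisor connection
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        status = "running" if dom.isActive() else "stopped"
        # Each guest gets its own slice of the host's CPU and memory.
        print(f"{dom.name():<20} {status:<8} {vcpus} vCPU(s), {max_mem_kib // 1024} MiB")
finally:
    conn.close()
```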
How to create a virtual machine in cloud computing?
This guide outlines the process of setting up a virtualization environment on a computer to run virtual machines. It focuses primarily on the VirtualBox hypervisor, a free, multi-platform, open-source tool. Other hypervisors, such as VMware, are also available to SCS Computer Science students and are supported with some documentation; KVM, Hyper-V, and Parallels exist as well, but technical support is not provided for them.
To prepare your host computer for virtualization, check its requirements first. This guide does not provide detailed steps for every operating system or computer model; it is a general guide, and users should look up the procedure for their specific operating system and computer model. By following these steps, users can create a virtualization environment that allows them to run virtual machines on their computer.
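As a hedged example of what the end result can look like once the host is prepared, the sketch below drives VirtualBox's `VBoxManage` command-line tool from Python to define and start a VM. It assumes VirtualBox is installed and `VBoxManage` is on the PATH; the VM name, OS type, and resource sizes are placeholders, and exact flags can vary between VirtualBox versions, so check the documentation for yours.

```python
# Sketch: script VirtualBox through its VBoxManage CLI.
# Assumes VirtualBox is installed and VBoxManage is on the PATH.
import subprocess

VM_NAME = "demo-vm"  # placeholder name

def vbox(*args):
    """Run a VBoxManage command and raise if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Create and register an empty VM definition.
vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")

# Allocate 2 GiB of RAM and 2 virtual CPUs to the guest.
vbox("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2")

# Boot the VM without a GUI window (attach install media first in practice).
vbox("startvm", VM_NAME, "--type", "headless")
```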
Is there a downside to virtualization?
Virtualization is a technology that creates virtual representations of computing resources, allowing more efficient utilization of physical hardware. It allows for the creation of multiple virtual instances of a resource or application, such as a server, desktop, storage device, or operating system. However, it can be expensive to set up and manage, especially as the number of virtual machines increases.
Virtualization is a technique that allows for the sharing of a single physical instance of a resource or application among multiple customers and organizations. The host machine is the machine on which the virtual machine is created, while the guest machines are the virtual machines created on the host machine.
What is the most popular virtualization solution in business usage?
The top virtualization platform technologies in 2024 include VMware, Citrix Workspace App, and VMware vSphere. VMware holds the largest market share at 44.51%, followed by Citrix Workspace App at 15.72% and VMware vSphere at 10.97%. Over 235,886 companies use these tools, with the majority falling in the 20-49 employee category. Other widely used technologies include VMware ESXi. Larger companies also account for a larger share of usage of these technologies.
Is virtualization costly?
A virtual server is a virtual machine that runs server applications and is more affordable than a physical server from a capital expenditure (CAPEX) standpoint, though it can become expensive in the long run. This article will help you understand the differences between physical and virtual server costs, evaluate server types by use case, weigh owning versus renting a server, identify the factors that determine virtual server cost, and learn cost-reduction best practices.
A physical server is a server that occupies physical space in a data center, office, or specific area and can be physically seen and handled. It can be transported, assembled, or disassembled.
How much money does virtualization save?
Virtualization has led to significant cost savings in data centers, with nearly $150,000 in direct cost savings and over $130,000 in indirect cost savings. Previously, data center operators installed one application per server, with a ratio of three to five physical servers per application. This approach resulted in low utilization rates, with only a fraction of computing resources engaged in useful work. However, a 2014 study by NRDC found that average server utilization was still between 12 and 18 percent.
Today, virtualization allows multiple applications per server, so a data center needs fewer physical servers, with each remaining server operating at higher overall utilization. This saves energy, since a virtualized data center requires fewer servers to accomplish the same amount of work. Virtualization also enables faster deployments, improves scalability, and reduces downtime, and it speeds up disaster recovery by allowing faster application restarts.
Virtualization allows for the quick movement of entire systems from one physical server to another, optimizing workload performance or performing maintenance without causing downtime. Some virtualization solutions have built-in resiliency features, such as high availability, load balancing, and failover capabilities.
What are some examples of how companies use virtualization?
Server virtualization allows businesses to run email servers, CRM systems, and databases on separate virtual servers within one physical server, maximizing hardware resources. Data virtualization allows data to be retrieved from multiple sources using one application or point of access, allowing businesses to manage data stored in different databases, systems, locations, and formats as if they were stored in one central location.
For example, a small retail business could use data virtualization to provide a unified view of sales data from its physical store, online store’s SQL database, and cloud data, enabling more effective analysis and data-driven decisions for future sales promotions and inventory management.
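To make the retail example concrete, here is a toy sketch of the "single point of access" idea. It is not a real data-virtualization platform (those query the sources in place rather than copying data), and the file, table, and column names are hypothetical; it simply exposes sales rows from the online store's SQL database and a point-of-sale CSV export through one combined view:

```python
# Toy illustration of a unified sales view over two sources.
# Assumes pandas is installed; file, table, and column names are placeholders.
import sqlite3
import pandas as pd

def unified_sales_view():
    # Source 1: the online store's SQL database.
    with sqlite3.connect("online_store.db") as conn:
        online = pd.read_sql_query(
            "SELECT sale_date, sku, amount FROM sales", conn)
    online["channel"] = "online"

    # Source 2: a CSV export from the physical store's point-of-sale system.
    in_store = pd.read_csv("pos_export.csv",
                           usecols=["sale_date", "sku", "amount"])
    in_store["channel"] = "in_store"

    # One combined view for analysis, regardless of where the data lives.
    return pd.concat([online, in_store], ignore_index=True)

print(unified_sales_view().groupby("channel")["amount"].sum())
```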
📹 Virtual Machine (VM) vs Docker
Is Docker just a lightweight virtual machine? It’s true that both have one thing in common, namely virtualization, but there are …
Very well done! Two other things to consider: Another “con” to VMs is maintenance and updates. Each VM is a running instance of an operating system, and as you point out, it has to be licensed. It also has to be patched, updated, and cared for, just like any other server. On the container side, one problem is persistence. Deploying containers that have databases, or other data stores that need to “stick”, is challenging. Containers are great because you can deploy them, move them around, and tear them down quickly and easily. Not so easy if they provide the persistence.
Your articles are the best on YouTube. Your voice is monotone and robotic, but it is actually soothing, and you explain things better than my IT teachers. Your animations are great, and I don't have to watch somebody talk about something I can't see lol. You helped me get my A+ last month and my AZ-900 last week. You taught me what RAM and routers were 3 years ago when I took my first laptop apart lol
I appreciate you trying to explain this. I'm 90 seconds in and you've already misrepresented the history of virtual machines and containers. You land on the right conclusion, but not articulating the history accurately misleads the audience into thinking that virtualization and containers are a new phenomenon, which they aren't.
A few slight (and common) misconceptions, but overall a nice article. Thanks for putting it out. There is no way RoboForm is ranked the #1 password manager by any reasonable measurement. They appear to hold no certifications and don’t publish CAIQ assessments, SOC 2 reports or third party security reviews, nor do they have a vulnerability disclosure program. I don’t see why anyone should trust them above the top players in that space.
Wow, bro, a huge thumbs up for you and all your articles. The way you break these complex theories into practical bits amazes me. Even a beginner without IT knowledge can become an IT expert overnight by perusing your articles. I wish you could do more articles on server administration with Windows, Linux, UNIX, and SQL servers. Please keep up this great work; you are helping and saving lives.
Hello, thank you very much for your most efficient articles. English is not my first language (not native English). Despite this, I understand everything perfectly. The pace of the language and the way the topics are explained are excellent. Although you explain the complex topics professionally and do not leave out any technical context, everything is extremely understandable. The animations are excellent and contribute to understanding extremely effectively. Thank you again for your effort. I will recommend you without reservation.
We ran large ESX deployments across two data centers on HP blade servers… literally hundreds of virtual servers for all sorts of healthcare apps and for the virtual desktops users ran for those apps. I see Docker as the next level of application deployment on top of those VMs. ESX (like other virtual OS platforms) provided the ability to physically distribute systems across those data centers and their hardware pools, either for load sharing or disaster recovery (in the event one DC had issues). With ESX, you can “float” the servers between machines almost at will. Mixing ESX with Docker seems like a good combo to bring application deployments in DCs to the next level.
Please take note that IBM has had virtual machines since 1969 with VM/370 (written by MIT between 1967 and 1969). Today z/VM and the VM firmware can run Unix, Linux, MVS, VM under VM, CMS, DOS, CICS, etc. There is specialized hardware to guarantee 100% uptime and quick encryption. VM/370 is the first and longest-available virtual machine software in the world! I should note that running an OS under VM is quick to boot and run applications. IBM put a lot of VM into the hardware. When a guest OS is running, VM gets out of the way until needed (such as for a privileged instruction). IBM z/Systems are very fast, secure, and allow for no downtime. You can also have devices up to 50 km (about 30 miles) away from the system.
Thanks for the article, nice summary! You say that containers share the underlying operating system, and that a container contains the application only. As a disadvantage you mention that they must be packaged for the same operating system as the server. My understanding and experience are different. A container actually does have its own operating system, but it is pretty lightweight. For example, BusyBox is just a 1.2 MB Linux distro. Alpine is ~5 MB. So they are really small, and they start up quite quickly. Therefore the mentioned disadvantage also does not apply. We can use Alpine in a Windows environment, so in that case the host would be Windows and the guest (i.e. the container) would be Linux.
Except that we used to (back in the olde days of the 90s), before VMs and Containers, run multiple applications on a single server by having each application use a different port and by taking advantage of the process manager. So, your opening statement isn’t quite right. For example: It was common for us to save resources by running the database and the webserver in a development or staging environment on the same server.
0:37 – If I am not mistaken, Linux, being a Unix-like operating system, has the ability to run multiple services on one single server. I have done it in the past many times, since 1997 and onward. In fact Unix, and Linux included, is time-sharing based and multitasking, which is what allows these OSes to do such work.
Good stuff yet again PowerCert! I want to say from an engineering point of view that running containers inside of a VM is a super bad idea because they're both made to handle the same problem: running a lot of services on a machine. You can make it work, but just because you can doesn't mean you should. Running two layers of virtualization is wasteful and more complex than needed. Docker is cool due to ease of use; a hypervisor is cool due to the flexibility it gives. They're both amazing when done right!
6:39 In general, containers are designed to be portable and can be run on any system that has the necessary container runtime installed. The operating system of the host system does not need to match the operating system of the container. Containers allow applications to be packaged with their dependencies and run in a predictable and isolated environment, regardless of the host system. This is achieved by using a container runtime, such as Docker, to abstract away the underlying host system and provide a consistent interface for running containers. That being said, there may be certain scenarios where the operating system of the host and the container need to match in order for the application to run correctly. For example, if the application is compiled for a specific operating system or if it relies on features that are specific to a particular operating system. However, these cases are relatively rare and most containers can be run on any system that has the necessary container runtime installed.
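As a small, hedged illustration of the point above, the snippet below uses the Docker SDK for Python (`pip install docker`) to run a throwaway Alpine-based container. The same few lines work on any host with a Docker-compatible runtime (on Windows or macOS that runtime itself typically sits on a lightweight Linux VM), because the runtime, not the host OS, supplies the container's environment:

```python
# Sketch: run a tiny container through the Docker SDK for Python.
# Assumes a local Docker daemon is running and the "docker" package is installed.
import docker

client = docker.from_env()  # connects to the local Docker daemon

# Pull (if needed) and run an Alpine container; with remove=True the
# container is cleaned up after the command exits, and its output is returned.
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```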
As far as I know, a Docker container has an entire OS packaged inside. Yes, the OS is reduced in size, with most of it stripped down and thrown away. But if you take a container of, let's say, Alpine, you can still install every package that is present in the desktop version, including a graphical environment. And in the very same sense that you use virtual machines, you can use a Docker container as a Linux VM on your Windows machine. So imo Docker is just a VM in the end, with stripped-down internals and on-demand VM download.
The other issue we had in the past was portability. We had thousands of servers and tens of thousands of apps, and we needed to monitor their CPU and RAM usage and balance them among the servers in a cluster. We needed to move the file system, application startup config, user accounts, and IP addresses around as a single unit across servers and then add capacity as necessary. The other issue was shared storage, as we did not want to co-locate apps on the same LUN because each application owner paid for their own storage and expected a level of capacity and performance according to their SLA. This was a real challenge to manage. We used other technologies in the past, such as partitioning or even Veritas Cluster Server, to facilitate a homegrown version of containers called vtiers, but it was a big pain to design, deploy, administer, and bill for, and even more challenging to manage backups and disaster recovery of applications organized this way. VMware and containers have made this so much easier, and vMotion is heaven-sent. (I have been a data center sysadmin for the last 30 years.)
A little precision: the Docker engine is not the only engine for running containers. Some technologies such as LXC (Linux Containers) have existed for a long time and are way older than Docker. However, Docker is so popular and widely used because it makes the process of running containers really easy and fun. Nice article, as always! Keep up the good work buddy, your website is amazing.
A concise and easy-to-grasp article about an important distinction; well done! 3:55 The word “to” is missing from the “…necessary for it (to) run.” that's onscreen. 6:55 Less critically, the word “all” is not onscreen when describing the results of an OS crash on a containerised system. If you hadn't taken the time to present the potential drawbacks of containers, this would seem like an ad for Docker (which was not capitalised, but that may be part of that company's image and branding).
This is a fair explanation, save for a few details: VMs are hardware-based. The CPUs have special features such as SLAT (second-level address translation) to support hypervisors in partitioning the machine. On the other hand, containers come in various flavors. We distinguish four on Windows, two of them surfaced through Docker, which is just a management stack: process isolation, which is what you described, is typically used to ship applications and their dependencies, and shares the host's kernel; and Hyper-V isolation, which is much closer to an actual VM and boots its own kernel. That second kind is a security boundary while the first kind is not. Then there are (even) more lightweight containers that share everything but virtualize the file system and registry, and that you can see as “app containers”, and also heavier containers that are VM-like and can run containers inside them, aka nested virtualization. One example is Windows Sandbox, which looks like a VM but boots very quickly; another is Windows Defender Application Guard, which runs browser tabs with their own separate kernels, and thus they are security boundaries. These really are containers, but of a heavier kind that, unlike regular VMs, work with the host for things like sharing the host's memory pool and CPU resources while keeping the kernels (and the rest of the OS) separate from each other.
Indeed, when it comes to on-premises infrastructure, running containers typically involves utilizing physical servers as nodes in the cluster. However, the scenario is slightly different in the context of well-known public clouds. In such cloud environments, containers are usually executed on virtual machine nodes. This remains true unless you opt for SaaS container services, where you might have less control and encounter a different pricing model.
This is a great explanation, but I would disagree on a couple of points. 1 – Depending on the virtual server, individual VMs may not take long to boot. I have a Win10 VM running on a Dell server using ESXi 7.0 that takes 14 seconds to boot to the login prompt. 2 – They will only use the memory/disk that you allocate to each individual VM to do its job. Thank you for the article.
Thank you SO much for your fantastic, informative, easy-to-understand articles that make these concepts easy to understand. Have you given any thought to doing an updated series of articles on the current Comptia A+ exam? The 1101 and 1102 series? I am already using several of your articles to help with my studies, but it would be great to see the entire series covering every topic.
VM: IaaS (OS, application, data). Containers: PaaS (application, data). Cloud service: SaaS (data). One advantage of virtualizing… is that you can move items around… the container can be moved from one server to another without disruption, or run in multiple instances for failover/load balancing, or even geo-redundancy so it's available from different data centres for faster response.
The initial statement that servers were/are unable to run multiple applications on one server securely is pure nonsense. If that were true, you wouldn’t be able to run multiple VMs, let alone docker, on one server either. Aside from that nitpick, excellent summary of the benefits and drawbacks of VMs vs Containers.
Nice article regarding the comparison terminology between both solutions. What always bothered me was: what is the difference between containers and appliances, since they both can be packaged for application distribution? Also, it's very common that containerization has a high footprint on virtual platforms…
To anyone reading comments, OP gets containers almost absolutely WRONG. Containers are literally miniaturized operating systems. They do NOT share the host operating system. The container runtime (e.g. Docker) translates system calls to the host OS. I think you meant that it shares the kernel, memory, and storage resources? If you pull a Linux container and run it on a Windows machine, the Linux container obviously is not sharing the host operating system.
I think Docker containers are also less secure than VMs because VMs are much more isolated? Also, I don't understand how exactly a container can boot up in milliseconds. For example, if there's a web application running on NodeJS, will it not have to install all npm dependencies first? Or is everything, including the dependencies, already installed in the container, so booting the container just requires starting the NodeJS server?
Wow! Someone that is actually recommending RoboForm! I have been using it since 2006. The main reason I prefer it over the others is the fact that it is an ACTUAL application (not a web extension). I have used other password managers (like LastPass & Bitwarden), but I do not care for web-based interfaces. RoboForm has both a web interface and a Windows interface for old-school techs like me.
When you say that “containers share the underlying OS that's on the server between them” 5:35, then why do we need to specify a base image (like Linux Alpine) as the first step when we build a Docker container? Wouldn't we need to specify the same OS as the server? Or even not specify an OS at all?
How about shared HW resources? For example, if I want to run software that requires a hardware USB dongle, can I use either of these? Can I somehow restrict it so that this USB port is dedicated to this container or VM, and the other port to the other one? If apps in different VMs or containers are using Ethernet ports, is a common virtual switch also emulated? Interesting topic, thank you for the article.
Keep in mind that VMs dedupe common read-only memory blocks to conserve RAM, i.e. if you have 20 Linux VMs running on the same hypervisor, the kernel and other read-only code only exist once in RAM. This greatly improves efficiency, as that common code instance has a better chance of remaining in the CPU caches. Same for storage… if you use subsystems like NetApp that dedupe identical disk blocks to reduce actual usage, those common blocks are more likely to stay resident in the disk buffer caches as well.
Uh, servers run more than one app or service dude. You obviously have never run Citrix or SBS……or an AS400. The one app per server model is pushed by MSPs so they can justify lots of VMs on small accounts. Let’s see, you need a print server and a file server, and a DHCP server, and a DNS server, and an AD server, etc. Oh yeah…you only have 8 users. Containers are just application streams, but run centrally vs deployed on each end point.
While my Xubuntu VM takes 7 seconds to boot, loading the 1.1 GB LibreOffice container takes more time. Performance differences are not that big; VMs run at above 95% of the raw hardware speed. The disk sizes are significantly larger for VMs, but in a VM you could use more apps and/or more containers 🙂 As a home user I have an encrypted VirtualBox VM (Ubuntu 16.04 ESM) that I use exclusively for banking, and within that VM I run containers (the latest stable snaps) for Firefox and LibreOffice (Calc). ESM = Extended Security Maintenance, until 2026-04.
Virtual machines were around long before Windows and Linux. For example, IBM released VM/370 back in 1972, so it was more of a rediscovery than an innovation. Also, some architectures allowed machines to be partitioned at the firmware level even before that time. In the case of IBM's CMS operating system, which was a single-user OS aimed mainly at developers, it was designed to avoid needless duplication of operating system code in memory by using shared code between OS images. Of course, that can only be done if the guest OS is written to facilitate it. Also, the one-application-per-operating-system-instance approach was not the norm back in the 1970s, when running multiple applications on the same operating system image was common. It only became common to have dedicated operating system images per instance when cheap hardware made it possible for developers to have dedicated development machines per project or even per developer. Then the art and discipline of writing and developing applications which could share an OS image was rather lost, albeit to be fair, there are often good reasons why you want them separate.
I know you tried to make the article as simple as possible, but you still need to keep in mind that it is false that one container contains all the applications needed to get a web site working; usually it is a few containers which communicate with each other to get a web site working. That way it is easy to maintain and upgrade each container independently of the others.
Containers don't take milliseconds to initialize in the same way VMs can take minutes to boot. Container metrics, like initialization time, are typically measured in seconds, so it's not realistically feasible to determine how many milliseconds a container takes to initialize, as most metrics data can't measure that precisely. Additionally, containers, even extremely lightweight ones, don't initialize that fast, even with beefy hardware.
So a container doesn't provide the OS like a VM does? Then how can I run a Node container, which is based on a Linux container, on my Windows laptop? Is it because my Windows already includes Linux inside it? Also, if containers don't provide the OS, then why and how do they each have their own file system with unique files and folders? Is it because they just use the OS of the physical/virtual machine as a template, but have their own storage for files and folders? Like, could you say that the OS is like a class and containers use an instance of that class? But if so, why would a crash of the OS crash all the other containers?
Hello sir, love your articles. I have a query regarding IP addresses. Since most people normally buy dynamic IPs, they have the ability to change their IP every once in a while. The most common method is unplugging your Wi-Fi for a few hours or releasing your current IP for a few hours. However, this is very tedious and you never know how much time it's going to take. Is there any method where you can instantly release your current IP and renew it with a new one?
I think VMs were started for other reasons, as applications used different ports on servers, so you could easily host more than one service if you really wanted to. Servers can handle quite a bit once people learn how to program around bottlenecks in the system; I believe they did it to make more money (prices did not go down with major updates to PHP, did they?). The Docker concept is nothing really new; it's just going back to the Win95 model where the OS is not sticking its nose into everything. Win95 apps were lightning fast; they just had big memory issues that were solved years later. VMs and containers solve a few problems you went over. Nice job.
I am not sure about that; with VMs, first of all, you do not need to boot them up often, and additionally, they boot up in seconds. It could be that some huge VM takes minutes to boot up, but as you also mentioned, there is usually one service running and the server is not that big. I do not know about Google or Facebook and other systems with billions of users; there it could all be different, of course.
One big difference between the two is the security of the host system: if you run something as root in a VM, it stays in the VM and has no effect on the host OS. You can completely hose the guest system with a botched command run with root privileges, and the host OS is completely unaffected. However, if you run something as root in Docker, it will be run with root privileges on the host OS. And since you can set a different root password in a Docker container, including no password at all, that means you can run stuff with root privileges without a root password from Docker, in a way that affects the host OS (e.g. accesses, creates, or modifies files in the host OS with root privileges). I'm not making that up; just search Docker's own documentation on security.
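As a hedged sketch of one common mitigation related to the point above (Docker's user-namespace remapping, covered in the security docs the commenter mentions, is another): start the container's process as an unprivileged user rather than root. This again assumes the Docker SDK for Python; the UID/GID values are placeholders.

```python
# Sketch: run a container's process as an unprivileged user instead of root.
# Assumes a local Docker daemon and `pip install docker`; 1000:1000 is a placeholder UID:GID.
import docker

client = docker.from_env()

# "id" prints the effective user inside the container; with user= set,
# the process does not run as UID 0 (root).
result = client.containers.run("alpine", "id", user="1000:1000", remove=True)
print(result.decode().strip())
```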
A little update for you: Docker also handles WASM, and is mostly just a qemu-like abstraction over libcontainer. Docker has virtual hardware; I can even do CPU instructions written in JS. So I guess you view it from a single perspective. When I run RISC-V, which is a CPU design and so a full system, I run full-system emulation, something much lower-level than a hypervisor or a virtual machine monitor. You should in general maybe update your knowledge with resources about RISC-V.
More and more, Docker containers are seen as a way to make research data reproducible. Still, even granting that one can certainly reproduce a Docker container, the reproducibility of scientific research data and code will then depend on how reusable the containers themselves will be in the future. Also, this detracts from the efforts toward using more interoperable formats and protocols. I'm really undecided on whether Docker containers are really a good or bad solution for reproducibility.
It's a choice to buy a subscription or to use the free-to-play model. The government helps local business and builds all other services around it. The reason why you need Docker in this case: you aren't able to pay for games which require a monthly or yearly fee or must be bought as a stand-alone package, and politicians are shooting their mouths off about it. All of that is a problem after paying all the bills, dreaming about gardening things to have someday. My first time with virtualization was some time ago, preparing for a Lab 1 experiment and learning the basics, and after that I never needed it.
Heavy sigh… A generic term on the left and a concrete implementation on the right. An IBM article blowing off AIX WPARs (also containers) completely? I get not mentioning Solaris zones, but really? And zero mention of Docker as a (generally) one-by-one container technology that often needs an extra/external orchestration layer to manage more complex stuff? Some good info, but not treating the subject the way I might hope to see in an intro on YouTube…
All I know about Docker is that it's hugely resource-hungry and slow, and on top of that I can't think of a single application I'd use it for which I can't just download and run without it. There has been a huge amount of publicity behind Docker and Kubernetes, and I just can't find a use for either of them. What I need is not a “how does it work” article; it's a “who actually uses this and why?” article.