This assignment, which scored a Distinction in cloud computing, presents a report on a toyStore website built with Node.js following the MVC model, covering encryption methods, security, full-stack development, and decentralization
Overview of Cloud Computing
A Brief History of Cloud Computing
Figure 1: A Brief History of Cloud Computing (Foote, 2021)
The origins and development of cloud computing trace back to the 1950s and 1960s. During the 1950s, the high cost of mainframe computers led to the emergence of time-sharing in the late 1950s and early 1960s. This approach allowed multiple users to share access to a central mainframe, optimizing processor time and reducing idle periods. This concept marked the early stages of shared computing resources, which is a key aspect of today's cloud computing (Foote, 2021)
The concept of providing computing services over a global network began taking shape around 1969. American computer scientist J.C.R. Licklider played a crucial role in developing the Advanced Research Projects Agency Network (ARPANET), a precursor to the modern internet. His vision was to interconnect computers globally, allowing access to programs and data from anywhere
By the 1970s, cloud computing started to become more concrete with the advent of the first virtual machines (VMs). These VMs allowed multiple computing systems to operate on a single physical setup, leading to the idea of virtualization, which significantly shaped cloud computing's evolution
During the 1970s and 1980s, companies such as Microsoft, Apple, and IBM developed technologies that furthered cloud environment capabilities, including cloud server and server hosting technologies. In 1999, Salesforce became the first company to offer business applications via a website, marking a significant milestone in cloud application delivery
Amazon's introduction of AWS in 2006, offering services such as computing and storage in the cloud, marked another major development in cloud computing. This move prompted other major tech companies, such as Microsoft and Google, to launch their own cloud services, creating a competitive cloud computing landscape
What is Cloud Computing?
Cloud computing refers to the delivery of hosted services over the internet. This broad term encompasses three primary categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) (Wesley Chai, Stephen J. Bigelow, 2022)
There are two types of cloud environments: public and private. Public clouds offer their services to the general public over the internet, whereas private clouds provide hosted services to a select group of users with specific access and permission settings. Whether private or public, the aim of cloud computing is to offer convenient, scalable access to computing resources and IT services
The infrastructure of cloud computing includes both the physical hardware and the software components necessary to implement a cloud computing model. This model can also be referred to as utility computing or on-demand computing
The term "cloud computing" was coined from the cloud symbol frequently used to represent the internet in flowcharts and diagrams.
Client–Server
Client
Clients, also referred to as service requesters, are computer hardware components or software applications that request resources and services offered by a server. Client computing falls into three categories: thick, thin, or hybrid (heavy, 2022)
• Thick Client: This type of client offers extensive functionality, handles most of the data processing independently, and places relatively light demands on the server
• Thin Client: A thin client is a lightweight computer that depends heavily on the host computer's resources. In this setup, an application server takes care of the majority of the necessary data processing
• Hybrid Client: A hybrid client combines characteristics of both thin and thick clients. It relies on the server to store persistent data but is also capable of local processing.
Server
A server is a device or computer program that offers functionality to other devices or programs. It encompasses any computerized process that can be utilized or called upon by a client to provide resources and distribute tasks (heavy, 2022)
Some typical examples of servers include:
• Application Server: Hosts web applications that users on the network can access without needing their own individual copies
• Computing Server: Shares a significant amount of computing resources with networked computers that require more CPU power and RAM than a typical personal computer provides
• Database Server: Manages and provides databases for any computer program that processes well-organized data, such as accounting software and spreadsheets
• Web Server: Hosts web pages and facilitates the functioning of the World Wide Web.
Relationship between Client and Server
In the client-server model, which underpins contemporary networked computing, the connection between client and server is marked by a reciprocal, synergistic interaction. Both clients and servers assume specific yet complementary roles, collaboratively ensuring the seamless exchange of data and services within a network (Contributor, 2023)
Characteristics of the Client-Server Relationship:
• The flow of digital data operates under a distinctive client-server model. Imagine a client, akin to your smartphone, that does not need to store every app and file. Instead, it interacts with a centralized server, akin to a well-stocked digital library, using a specific protocol. This protocol facilitates efficient data transfer and communication, functioning much like a universal language (Joshjnunez, 2020)
• When the client requests a certain app or file, the server awaits this request. Upon its arrival, the server verifies the identity of the client before granting access. Once verified, the server promptly dispatches the requested item, be it a document, video, or software, to the client
• This process usually takes place over a network, which could be the vast internet or a private digital pathway. This synchronized, service-like communication is governed by protocols such as TCP/IP. TCP acts as a vigilant overseer, ensuring a smooth transition from the initial request to the final delivery, comparable to waiting in a restaurant for your meal to be served. In contrast, IP functions like an independent courier, dispatching separate data packets, each carrying a part of the overall information, similar to how different letters in a message convey various parts of the content
• Through their collaborative efforts, the client-server architecture enables access to a wide array of digital resources, transforming information storage and retrieval from a solitary task into a dynamic exchange between multiple entities
Types of Client-Server Relationships:
Client-server relationships can vary depending on the architecture and the specific needs of the network or application. Here are some common types:
• One-to-One (1:1): A single client communicates with a single server. This is often seen where highly specialized or secure communications are necessary
• One-to-Many (1:N): A single server provides services to multiple clients. This is common with web servers, where one server hosts a website accessed by many clients
• Many-to-One (N:1): Multiple clients interact with a single server. This setup is typical in scenarios such as email services, where many users (clients) access the same email server
• Many-to-Many (N:N): Many clients interact with many servers. This is seen in more complex network architectures, such as cloud computing environments, where multiple clients use a variety of services from multiple servers.
Peer-to-peer (P2P)
A peer-to-peer (P2P) service is a decentralized system in which two individuals interact directly, without the involvement of a third party. In this setup, the purchaser and vendor transact directly through the P2P service. The P2P platform may offer features such as search, vetting, user ratings, payment handling, and escrow services (Hayes, 2021)
These services use technology to reduce the transaction costs associated with trust, enforcement, and information imbalances, which have historically been addressed through trusted third parties. Peer-to-peer platforms provide users with services such as payment processing, access to information about both buyers and sellers, and quality assurance
Key features of a P2P network include:
• Contribution and Consumption of Resources: Every computer within a P2P network both contributes resources to the network and consumes resources provided by it. These resources include files, printers, storage space, bandwidth, and processing capability, all of which can be shared among computers in the network
• Easy Configuration: Setting up a P2P network is straightforward. Once established, access to resources is managed by configuring sharing permissions on individual computers. For added security, more stringent access controls can be implemented through passwords assigned to specific resources
• Overlay Networks: In some cases, P2P networks are built by layering a virtual network on top of a physical network infrastructure. The physical network handles data transmission, while the virtual overlay enables communication between the connected computers
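As a toy in-memory model of these features (the class and method names are invented for illustration, not part of any real P2P library), the sketch below shows peers that both contribute and consume resources with no central server involved:

```javascript
// Toy model of a P2P network: every peer both shares and requests resources.
class Peer {
  constructor(name) {
    this.name = name;
    this.files = new Map(); // resources this peer contributes
    this.neighbors = new Set(); // directly connected peers
  }
  // Peers connect symmetrically; there is no central coordinator.
  connect(other) {
    this.neighbors.add(other);
    other.neighbors.add(this);
  }
  share(filename, contents) {
    this.files.set(filename, contents);
  }
  // Look locally first, then ask neighbors directly for the resource.
  request(filename) {
    if (this.files.has(filename)) return this.files.get(filename);
    for (const peer of this.neighbors) {
      if (peer.files.has(filename)) return peer.files.get(filename);
    }
    return null; // resource not available anywhere in reach
  }
}
```

A real P2P system would add discovery, routing across the overlay, and access controls, but the symmetry shown here — each node acting as both client and server — is the defining property.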
Types of Peer-to-Peer Networks:
• Pure Peer-to-Peer: In a pure (fully) peer-to-peer network, all peers participate equally, with no centralized dedicated server (BANGER, 2023)
• Unstructured Network: This type of network relies on random node communication and suits applications with high activity levels. However, it demands significant CPU and memory resources to function effectively, and the hardware must support the maximum volume of network transactions to ensure smooth communication among all nodes
• Structured Network: In contrast to unstructured networks, structured networks organize interactions among nodes, allowing users to locate and use files efficiently without random searches. They require higher maintenance and setup costs than unstructured peer-to-peer networks, but offer greater stability
• Hybrid Network: Hybrid networks combine elements of both peer-to-peer and client-server architectures by introducing a central server with peer-to-peer capabilities. They offer advantages over both structured and unstructured networks, including better performance, and are an appealing choice for networks that aim to leverage the strengths of both P2P and client-server systems
Advantages and disadvantages of P2P networks:
• Advantages: decentralization and reduced dependency; scalability and easy expansion; lower infrastructure costs; fault tolerance
• Disadvantages: security risks and vulnerabilities; varied resource levels among peers; legal and copyright issues; limited control over network behavior
Examples of Peer-to-Peer (P2P) Services:
• Open-source Software: Open-source software allows anyone to view and potentially modify its code. It aims to decentralize control of software by involving a community of contributors and users in coding, editing, and quality control (Hayes, 2021)
• Filesharing: Filesharing platforms connect uploaders and downloaders who exchange media and software files. These services may include peer-to-peer networking, file scanning, and security features. They may also allow users to share copyrighted material anonymously or, conversely, enforce intellectual property rights
• Online Marketplaces: Online marketplaces provide a platform for individual sellers to connect with potential buyers. They may offer services such as seller promotion, buyer and seller ratings based on transaction history, payment processing, and escrow
• Cryptocurrency and Blockchain: Blockchain technology is a crucial component of cryptocurrencies. It establishes a decentralized network in which users can conduct and validate transactions without relying on a central authority or clearinghouse. Blockchain technology facilitates cryptocurrency transactions and the execution of smart contracts.
High performance computing (HPC)
Definition and Types of HPC
High-performance computing (HPC) is technology that handles extensive multi-dimensional datasets, often referred to as 'big data,' and solves complex problems rapidly by harnessing clusters of powerful processors operating in parallel. For context, an average laptop or desktop with a 3 GHz processor can execute approximately 3 billion calculations per second. While this far surpasses human capability, it pales in comparison to HPC solutions, which can execute quadrillions of calculations per second (netapp, 2023)
One well-known form of HPC solution is the supercomputer, comprising thousands of compute nodes working in unison on one or more tasks. This approach, known as parallel processing, resembles thousands of PCs interconnected and pooling their computational power to complete tasks faster
Parallel computing employs multiple processing elements simultaneously to solve problems. Problems are divided into instructions and resolved concurrently, with each assigned resource working at the same time (universedecoder, 2021)
Here are the benefits of Parallel Computing compared to Serial Computing:
• Time and cost savings result from multiple resources collaborating, reducing both time and potential expenses
• Solving larger problems using Serial Computing can be impractical
• Parallel Computing can tap into non-local resources when local resources are limited
• Serial Computing underutilizes the available computing power, whereas Parallel Computing optimizes hardware performance
There are four types of parallelism:
• Bit-level parallelism: This form of parallelism relies on increasing the processor's word size, which reduces the number of instructions needed to operate on large data sets
• Instruction-level parallelism: Without it, a processor typically completes less than one instruction per clock cycle. Instructions can be rearranged and grouped so that they execute simultaneously without affecting the program's outcome; this is known as instruction-level parallelism
• Task Parallelism: Task parallelism involves breaking a task into subtasks and assigning each subtask to a processor for concurrent execution
• Data-level parallelism (DLP): In DLP, instructions from a single stream operate simultaneously on multiple pieces of data. This is constrained by irregular data-manipulation patterns and by memory bandwidth
Why is Parallel Computing important?
• The real world operates dynamically, with many events occurring simultaneously in various locations; managing this extensive data is a significant challenge
• Real-world data requires dynamic simulation and modeling, and parallel computing is essential for achieving this
• Parallel computing offers concurrency, resulting in time and cost savings
• The organization and management of complex, large datasets are only practical with parallel computing
• It ensures efficient resource utilization, guaranteeing effective use of the hardware. In contrast, serial computation leaves much of the hardware idle
• Implementing real-time systems with serial computing is often impractical
Cluster computing refers to a group of computers, whether tightly or loosely coupled, that collaborate to function as a unified entity. These interconnected computers collectively perform tasks, giving the impression of a single system. Typically, clusters are connected via high-speed local area networks (LANs) (shubhikagarg, 2021)
Why is Cluster Computing Significant?
• Cluster computing offers a cost-effective alternative to traditional large server or mainframe solutions
• It addresses the need for rapid processing of critical content and services
• Numerous organizations and IT firms are adopting cluster computing to enhance scalability, availability, processing speed, and resource management while maintaining cost-efficiency
• It guarantees the constant availability of computational power
• Cluster computing offers a unified, vendor-independent approach for implementing and utilizing parallel high-performance systems
There are three types of clusters:
• High-Performance (HP) Clusters: HP clusters employ computer clusters and supercomputers to tackle complex computational tasks. They are ideal for tasks that require nodes to communicate as they work, and they are engineered to exploit the parallel processing capabilities of multiple nodes
• Load-Balancing Clusters: Load-balancing clusters distribute incoming requests among several nodes running similar programs or hosting similar content, preventing any single node from bearing an excessive workload. This distribution is common in web-hosting environments
• High-Availability (HA) Clusters: HA clusters maintain redundant nodes that serve as backups in case of failure. They provide consistent computing services for critical functions such as business operations, complex databases, customer-facing services (such as e-commerce websites), and network file distribution. Their primary goal is uninterrupted data availability for customers
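The load-balancing idea above can be shown with a minimal round-robin dispatcher: requests rotate across identical nodes so no single node is overloaded. This is a simplified sketch with invented node names; production balancers also weigh node health and current load.

```javascript
// Round-robin load balancer sketch: requests rotate across the nodes.
class LoadBalancer {
  constructor(nodes) {
    this.nodes = nodes; // names of the backend nodes in the cluster
    this.next = 0; // index of the node that receives the next request
  }
  dispatch(request) {
    const node = this.nodes[this.next];
    this.next = (this.next + 1) % this.nodes.length; // rotate to the next node
    return `${node} handled ${request}`;
  }
}

const balancer = new LoadBalancer(["node-a", "node-b"]);
```

Node.js also ships a built-in `cluster` module that applies this same idea to worker processes sharing one server port.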
Here are the benefits of Cluster Computing:
• Enhanced Performance: These systems deliver superior performance compared to traditional mainframe computer networks
• Manageability: Cluster Computing is easy to manage and implement
• Scalability: Resources can be seamlessly added to the clusters as needed
• Expandability: Computer clusters can be expanded easily by introducing additional computers to the network. Cluster computing can integrate extra resources or networks into the existing computer system
• Availability: When one node experiences a failure, the other nodes remain active and act as proxies for the failed node, ensuring improved availability
• Flexibility: It can be upgraded to higher specifications or expanded by adding more nodes
However, Cluster Computing also has drawbacks:
• Cost: Cluster computing is relatively expensive due to its substantial hardware requirements and design complexity
• Fault Identification Challenges: Identifying the specific component that is faulty can be a challenging task
• Increased Space Requirements: As more servers are necessary for management and monitoring, additional infrastructure space is needed
Distributed computing refers to a framework in which processing and data storage are dispersed across numerous devices or systems rather than centralized in a single device. Within a distributed system, each device possesses its own processing capability and may independently store and manage its own data. These devices collaborate to execute tasks and share resources, with no single device acting as the central focal point (geeksforgeeks, 2023)
Cloud computing illustrates a distributed computing system: resources such as computing power, storage, and networking are delivered via the internet and accessed on demand. Users access and utilize these shared resources through a web browser or other client software
A Distributed Computing System is characterized by the following features:
• Multiple Devices or Systems: Processing and data storage are distributed across various devices or systems
• Peer-to-Peer Architecture: Devices or systems within the distributed system can serve as both clients and servers, allowing them to request and provide services to other devices or systems in the network
• Shared Resources: Resources like computing power, storage, and networking are shared among the devices or systems in the network
• Horizontal Scaling: Expanding a distributed computing system typically involves adding more devices or systems to the network to increase processing and storage capacity. This expansion can be achieved through hardware upgrades or by incorporating additional devices or systems into the network
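Horizontal scaling can be illustrated with a toy key-placement scheme: data keys are spread across however many storage nodes exist, and adding a node grows total capacity. The character-sum hash and modulo placement below are deliberate simplifications (real systems use consistent hashing to avoid reshuffling most keys when nodes are added).

```javascript
// Toy horizontal-scaling sketch: place keys across N storage nodes.

// Simple deterministic hash: sum of character codes, mod the node count.
function placeKey(key, nodeCount) {
  let hash = 0;
  for (const ch of key) hash += ch.charCodeAt(0);
  return hash % nodeCount;
}

// Distribute all keys across the given number of nodes.
function distribute(keys, nodeCount) {
  const nodes = Array.from({ length: nodeCount }, () => []);
  for (const key of keys) nodes[placeKey(key, nodeCount)].push(key);
  return nodes;
}
```

Scaling out is just calling `distribute` with a larger node count: the same keys spread over more nodes, so each node holds less and the system as a whole can store more.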
Advantages of Distributed Computing Systems include:
• Scalability: Distributed systems tend to be more scalable than centralized systems, as they can easily incorporate additional devices or systems to boost processing and storage capacity
• Reliability: Distributed systems often exhibit higher reliability compared to centralized systems, as they can continue functioning even if one device or system experiences a failure
• Flexibility: Distributed systems are typically more flexible than centralized systems, as they can be configured and reconfigured more readily to adapt to changing computational requirements
However, there are some limitations to Distributed Computing Systems:
• Complexity: Distributed systems can be more intricate than centralized systems, as they involve multiple devices or systems that require coordination and management
• Security: Securing a distributed system can be more challenging, as security measures must be implemented on each individual device or system to ensure the overall system's security
• Performance: Distributed systems may not deliver the same level of performance as centralized systems, given that processing and data storage are distributed across multiple devices or systems
Distributed Computing Systems find various applications, such as:
• Cloud Computing: Cloud computing systems, a form of distributed computing, provide resources like computing power, storage, and networking via the internet
• Peer-to-Peer Networks: Peer-to-peer networks, another kind of distributed computing system, enable users to share resources like files and computing capacity among themselves
• Distributed Architectures: Many contemporary computing systems, including microservices architectures, employ distributed architectures to distribute processing and data storage across multiple devices or systems
Example
New technology brings fresh possibilities, and this holds equally true for high-performance computing. Major tech companies have swiftly embraced the potential of high-performance cloud systems, so even smaller businesses and individual users can now tap into remarkable advancements in AI, analytics, and application development (weka, 2022)
Nevertheless, HPC truly stands out in serving substantial research and development organizations across critical sectors. High-performance computing is fundamentally altering the business landscape for many companies, bringing a revolutionary shift in how scientists and engineers conduct research and development
Example about Machine Learning and AI:
Machine learning and AI are not only standalone technologies but also have extensive applications across almost every industry mentioned in this list
The pursuit of machine learning has been a longstanding objective among computer scientists, dating back to the inception of computers. However, limitations in both hardware and software traditionally constrained the possibilities of general AI. High-performance computing has transformed this landscape, and many cloud platforms now incorporate AI components
How has HPC driven the advancement of AI?
• Enriched Big Data Training Sets: One historic constraint of classical machine learning was the scarcity of training data. With the emergence of cloud architecture and big data, these algorithms now have access to terabytes of data that engineers can harness to improve strategic decision-making
• Parallel Processing: AI workloads are inherently parallel, as machine learning algorithms continuously analyze vast volumes of data using a stable, repetitive set of computations. Hardware-accelerated GPUs have made parallel processing practical, fueling cloud platforms capable of efficiently running learning algorithms over extensive data
• Distributed Applications: Always-on HPC applications have turned AI into a practical reality. Today, even consumers and businesses can engage with AI through purpose-built analytics applications accessible over the internet
Setting aside consumer-oriented products such as online banking and chatbots: for major financial institutions (including mid-sized investment firms), high-performance computing is ushering in a transformative era. High-performance cloud computing powers the predictive models that drive decision-making in risk management, investment, and real-time analytics
What does HPC signify for the financial sector?
• Responsiveness: Companies can swiftly and accurately adapt to fluctuations in financial markets and conditions
• Resilience: HPC facilitates the creation of comprehensive system snapshots, the development of redundancy plans, and the establishment of rapid recovery strategies, all of which become practical realities
• Intelligence: The analytical capabilities that HPC offers decision-makers in financial institutions surpass human capacity for recognizing patterns and trends. These insights can significantly enhance profitability and stability for virtually any company.
Deployment models
Public
The name itself is self-explanatory: public clouds are accessible to the general public, and data is generated and stored on servers managed by third-party providers
The server infrastructure is owned by the service providers, who oversee and maintain the pooled resources. This eliminates the need for user companies to purchase and manage their own hardware. Providers offer resources as a service, either free of charge or on a pay-per-use basis, accessible via the internet, and users can adjust the resources to meet their specific needs (Shaptunova, 2023)
Advantages of the public cloud:
• Effortless Infrastructure Management: When a third party manages your cloud infrastructure, they handle the development and maintenance of the software, saving you from this task. Setting up and using the infrastructure is straightforward and user-friendly
• Enhanced Scalability: As your business needs grow, you can seamlessly expand the cloud infrastructure's capacity to accommodate them
• Cost Efficiency: You are billed only for the resources you actually use, eliminating the need to invest in physical hardware or software
• Continuous Availability: The extensive, robust network of servers maintained by your service provider keeps your infrastructure constantly accessible, improving operational uptime
Disadvantages of the public cloud:
• Questionable Dependability: The same network of servers designed to safeguard against failures can, ironically, be a source of unreliability. Incidents such as the 2016 Salesforce CRM outage, where a storage collapse led to significant disruptions, highlight the potential for outages and malfunctions in public clouds
• Data Security and Privacy Concerns: While accessing data in a public cloud is straightforward, this deployment model often leaves users in the dark about the exact location of their data and who else might have access to it
• Limited Customization: Public cloud providers typically offer only standardized service options. This one-size-fits-all approach can be insufficient for more intricate or specialized business needs
Real-time use case of public cloud:
In reality, small businesses and systems frequently opt for the utilization of public cloud services
As previously discussed, the primary factors that enhance flexibility in this type of deployment are scalability and reduced maintenance requirements. Additionally, the cost-effectiveness of public cloud solutions plays a pivotal role in improving efficiency for small businesses
I have personally employed public cloud services as the deployment platform for several projects. Specifically, I used Firebase, a widely recognized web and mobile application development platform provided by Google, to deploy a personal "Portfolio" project
Figure 6: A Project deployed onto Firebase
Firebase enjoys extensive popularity in both small and large-scale application development due to its user-friendly nature, scalability, and seamless integration with other Google Cloud services
It simplifies numerous common development tasks and provides a robust infrastructure for creating top-notch web and mobile applications
From experience, Firebase offers a flexible deployment process that significantly streamlines these tasks. With just a few interactions through its user-friendly web console, I successfully deployed the Cloud Edition, a website equipped with an online messaging service
Public deployments are known for their remarkable stability, with consistently high uptime. The deployment process itself is straightforward, relying on the well-known version control service, Git
Figure 7: My portfolio deployed on Firebase. The result of the process is a website pushed to the cloud:
Link website: https://myporfolio-434bc.firebaseapp.com/
Private
From a technical standpoint there is minimal distinction between a public cloud and a private one, as their architectures are highly similar. However, unlike a public cloud, which is accessible to the general public, a private cloud is owned exclusively by a specific company, which is why it is also referred to as an internal or corporate model (Shaptunova, 2023)
The server infrastructure can be hosted externally or on the company's premises. Irrespective of physical location, these infrastructures operate within a dedicated private network and use software and hardware intended exclusively for the owner company's use
Access to the information stored in a private repository is restricted to a clearly defined set of individuals, preventing general public access. In light of numerous security breaches in recent years, an increasing number of large corporations have opted for a closed private cloud model to minimize data security concerns
Compared to the public model, the private cloud offers greater flexibility in customizing the infrastructure to the company's specific requirements. Private models particularly suit companies that need to safeguard mission-critical operations or that have constantly evolving needs
Several public cloud providers, such as Amazon, IBM, Cisco, Dell, and Red Hat, also offer private solutions. At SaM Solutions, we've developed an efficient, ready-to-use Platform as a Service called SaM CloudBOX. This PaaS streamlines projects with quick, simple deployment, enabling companies to leverage BizDevOps to the fullest
Advantages of the private cloud:
• Tailored, adaptable development with excellent scalability, enabling businesses to shape their infrastructures to meet specific needs
• Strong security, privacy, and dependability, ensuring that only authorized individuals can access resources
The primary drawback of the private cloud deployment model is its cost, as it necessitates significant expenditure on hardware, software, and employee training. This is why this secure and adaptable model may not suit small businesses
Real-time use case of private cloud:
Figure 9: Example about private cloud
The State Bank of India (SBI) initiated a substantial IT transformation in 2012 with the launch of its private cloud solution, "MeghDoot". This move aimed to address the growing need for fast, reliable, and secure services in the banking sector, particularly in payments. MeghDoot, recognized as one of India's most robust private clouds, comprises around 7,500 virtual machines (VMs). It supports a wide array of technologies and applications related to financial services, ensuring high availability and scalability while adhering to security and regulatory compliance standards (Haritas, 2020)
SBI's adoption of MeghDoot was part of a broader strategy to enhance customer experience and maintain a competitive edge in the banking industry. This step reflects the bank's proactive approach to recognizing and adapting to changing market trends and evolving customer expectations. The bank's focus on digital transformation is evident in various initiatives, such as simplifying digital experiences for customers and optimizing branch processes with technology. One example is the "No Queue" app, which lets customers book a virtual queue ticket (e-Token) for select services at select SBI branches, reducing waiting times and providing real-time updates (Srikanth, 2017)
MeghDoot's implementation at SBI exemplifies the strategic importance of cloud computing in modern banking, where agility and rapid service deployment are crucial. The private cloud infrastructure has enabled the bank to significantly reduce the time required to procure new hardware, accelerating the launch of new business services. The cloud infrastructure has also been instrumental in managing peak business requirements, such as performance testing during high-traffic periods, without impacting the production environment
In summary, SBI's transition to a private cloud with MeghDoot has been a key factor in its digital transformation journey, allowing it to stay relevant and competitive in a rapidly evolving digital banking landscape. The move demonstrates the bank's commitment to leveraging technology for improved efficiency, customer satisfaction, and operational excellence in the banking sector
Community Cloud
A community deployment model closely resembles the private cloud model, with the main distinction being the user base. While a private cloud server is owned by a single company, a community cloud is shared among multiple organizations with similar backgrounds, who utilize the infrastructure and associated resources (Shaptunova, 2023).
When all participating organizations have consistent security, privacy, and performance criteria, this multi-tenant data center architecture enables these companies to improve their operational efficiency, particularly in collaborative projects. A centralized cloud environment streamlines project development, management, and execution, with the cost burden being distributed among all users.
Advantages:
• Cost savings
• Enhanced security, privacy, and reliability
• Facilitated data sharing and collaboration
Disadvantages:
• Higher cost compared to the public deployment model
• Shared fixed storage and bandwidth capacity
• Limited adoption at present
Real-time use case of community cloud:
Figure 11: Example about community cloud
The "Health Data Compass" project is a typical example of using the community cloud computing model to manage and share medical data in the healthcare field. This project has been implemented in the state of Colorado, USA, and some details about how it works follow (healthdata, 2022):
Health Data Compass goal: Health Data Compass was created with the goal of building a secure, flexible, and efficient medical data sharing platform between healthcare organizations, including hospitals, clinics, and medical research organizations. The aim is to leverage medical data to improve treatment, medical research, and public health management.
Use of the community cloud: Health Data Compass uses a community cloud computing model to store and manage health data. In this model, healthcare and research organizations can access and share their data securely on the same platform. This helps reduce data dispersion and facilitates large-scale data research and analysis.
Diverse data: Health Data Compass allows storing and sharing a variety of medical data, including electronic medical record data, X-ray images, drug data, and medical research. This helps researchers and medical professionals access diverse information to support clinical and research decisions.
Security and compliance: One of the key elements of Health Data Compass is ensuring security and compliance with health data protection regulations such as HIPAA. Medical data is encrypted and strictly controlled to ensure patient privacy and safety.
Supporting the research process: Health Data Compass provides health researchers with tools and resources to conduct research projects and analyze health data. This improves research efficiency and reduces the time needed to search for and access data.
With the combination of community cloud computing and clinical care, Health Data Compass has created a powerful platform to improve health management, medical research, and patient care in the state of Colorado and beyond.
Hybrid Cloud
As is often the case with hybrid solutions, a hybrid cloud incorporates the most advantageous attributes of the deployment models mentioned earlier (public, private, and community). It enables companies to selectively combine elements from these three types that align best with their needs (Shaptunova, 2023).
For instance, a company can distribute its workload by placing critical tasks on a secure private cloud and less sensitive ones on a public cloud. The hybrid cloud deployment model not only ensures the protection and management of mission-critical assets but does so in a cost-effective and resource-efficient manner. Moreover, this approach simplifies the transfer of data and applications.
Flexibility: This cloud type offers exceptional flexibility, allowing you to choose the most suitable aspects from each cloud type and integrate them into your solution.
Cost: While hybrid clouds are not inherently more expensive than other cloud types, there is a risk of overspending if you don't carefully select the right services.
Scalability: You are not restricted to a single platform or its limitations, enabling you to scale according to user demand.
Data Segregation: When using a combination of public and private services, it's essential to ensure that all your data is appropriately segregated. This can increase the security, compliance, and auditing requirements for your business.
Real-time use case of hybrid cloud:
Figure 13: Cisco and Google hybrid cloud
To enable Cisco's hybrid cloud solution with Google Cloud, applications are distributed across data centers provided by Cisco and the Google Cloud Platform. This setup is managed using Kubernetes and Istio. Kubernetes oversees the management of cloud containers, while Istio handles the connection, management, and security of the microservices used to build isolated applications.
The Cisco platform employs VPN management to establish a secure and efficient system that operates seamlessly with both on-premises servers and cloud resources. By working in conjunction with Google VPC traffic records and local conditions, Stealthwatch Cloud ensures active security monitoring and chain discovery. As a result, technicians can focus primarily on the architecture of applications.
Figure 14: Example of Hybrid Cloud (Cisco)
When combined with the on-premises Kubernetes administration directory, Cisco CloudCenter, the open service broker, and Istio collectively provide security for horizontally scaled microservices, whether they are hosted on-premises or on Google Cloud. The ability to seamlessly integrate on-premises applications and data with the open cloud represents a significant advancement in Cisco's hybrid cloud platform for Google Cloud.
Cloud service models
Software as a Service (SaaS)
Software as a Service (SaaS) is a web-based delivery approach that allows users to access software via a web browser. With SaaS, users do not need to be concerned about the software's hosting location, underlying operating system, or the programming language it is coded in. This type of software can be accessed from any device connected to the internet.
SaaS ensures that users always have access to the latest software version, as maintenance and support are managed by the SaaS provider. In this model, users do not have control over the underlying infrastructure, including aspects like storage and processing power.
Figure 16: SaaS Services (Peterson, 2023)
The characteristics of SaaS include:
• Absence of hardware and software update responsibilities for SaaS users
• Services are acquired based on a pay-as-you-go model
• Quick setup, allowing for instant use
• Integration capabilities are determined by the provider, making it challenging to customize integrations on your own
• Cost-effectiveness compared to on-premises software
• Potential for incompatibility with existing tools and hardware in your business
• Software management and updates are typically included in the subscription or purchase
• Data security relies on the SaaS company's measures, posing a risk if any security breaches occur
• Doesn't consume local resources like hard disk space
• Offers a wide range of hosted capabilities and services as a cloud computing category
• Facilitates easy development and deployment of web-based applications
Platform as a Service (PaaS)
Platform-as-a-Service (PaaS) offers a cloud computing framework for the development and deployment of software applications. It serves as a platform for managing and deploying software applications, with the advantage of automatic scalability based on demand. PaaS takes care of server management, storage, and networking, allowing developers to focus solely on the application development aspect. Additionally, it provides a runtime environment and deployment tools for application development (Peterson, 2023).
This model offers the necessary resources to support the intricate process of creating and delivering web applications and services exclusively over the internet. This cloud computing approach empowers developers to quickly create, operate, and oversee their applications without the need to construct and uphold the underlying infrastructure or platform.
PaaS exhibits the following features:
• Leverages virtualization technology, allowing for flexible scaling of computing resources to align with the organization's requirements, whether it be scaling up or down automatically (Auto-scale)
• Provides support for a variety of programming languages and frameworks
• Seamlessly integrates with web services and databases
• Streamlined and cost-effective app development and deployment
• You have control over the app's code but not its infrastructure
• Allows customization of SaaS apps without the burden of software maintenance
• Data storage by the PaaS provider can pose security risks for your app's users
• Supports the automation of business policies
• Vendors offer varying service levels, necessitating careful service selection
• Simplifies migration to the hybrid model
• Vendor lock-in risks may impact the ecosystem required for your development environment
• Enables developers to create applications without dealing with the underlying operating system or cloud infrastructure
• Grants developers the freedom to concentrate on app design, while the platform manages language and database aspects
• Facilitates collaboration among developers working on the same app.
Infrastructure as a Service (IaaS)
Infrastructure-as-a-Service (IaaS) is a cloud computing service that offers computing, storage, and networking resources available on demand. Typically, this operates on a pay-as-you-go basis. Instead of purchasing hardware outright, organizations can procure resources as needed and on demand (Peterson, 2023).
The IaaS cloud provider manages the infrastructure elements, encompassing the on-premises data center, servers, storage, networking hardware, and the virtualization layer (hypervisor).
This model encompasses the fundamental components for your web application, granting you full authority over the hardware that supports your application, including storage, servers, virtual machines, networks, and operating systems. The IaaS model provides exceptional flexibility and management control over your IT resources.
The characteristics of IaaS include:
• A dynamic and adaptable Cloud Service Model
• Access via GUI (Graphical User Interface) and API (Application Programming Interface)
• Simplified automation of storage, networking, and server deployment
• Responsibility for ensuring the proper functioning and security of apps and operating systems rests with the user
• Hardware procurement can be based on consumption patterns
• Data management falls under the user's purview, including data recovery in case of loss
• Clients retain complete control over their underlying infrastructure
• IaaS providers offer servers and APIs, requiring users to configure all other components themselves
• Providers have the capability to deploy resources to a customer's environment as needed
• Scalability to accommodate changing requirements.
Comparing Service Models
• Stands for: Software as a Service (SaaS); Platform as a Service (PaaS); Infrastructure as a Service (IaaS)
• Uses: SaaS is used by the end user; PaaS is used by developers; IaaS is used by network architects
• Access: SaaS gives access to the end user; PaaS gives access to a runtime environment and to deployment and development tools for applications; IaaS gives access to resources like virtual machines and virtual storage
• Model: SaaS is a service model in cloud computing that hosts software to make it available to clients; PaaS is a cloud computing model that delivers tools used for the development of applications; IaaS is a service model that provides virtualized computing resources over the internet
• Technical knowledge: SaaS has no requirement about technicalities, as the company handles everything; PaaS requires some knowledge for the basic setup
• Popularity: SaaS is popular among consumers and companies, such as for file sharing, email, and networking; PaaS is popular among developers who focus on the development of apps and scripts; IaaS is popular among developers and researchers
• Cloud services: SaaS examples include MS Office web, Facebook, and the Google search engine; IaaS examples include Amazon Web Services, Sun, and vCloud Express
• Enterprise services: SaaS, IBM cloud analysis; PaaS, Microsoft Azure; IaaS, AWS virtual private cloud
• User controls: SaaS, nothing; PaaS, data of the application; IaaS, the underlying infrastructure
Characteristics of cloud
We will explore various attributes of cloud computing and exercise caution when choosing a cloud service for our organization. It is crucial to consider certain distinctive features, with our primary focus being on discussing the five fundamental characteristics of cloud computing (AnkitMahali, 2023).
Figure 19: Characteristics of Cloud Computing (AnkitMahali, 2023)
• On-Demand Self-Service: On-demand self-services like email, mobile apps, network, or server resources can be accessed without the need for human intervention to deliver these services
• Broad Network Accessibility: Users can access these services from a variety of devices such as mobile phones, tablets, laptops, and workstations
• Resource Pooling: Multiple users can efficiently utilize both physical and virtual resources that are dynamically allocated and reallocated based on their specific needs
• Rapid Scalability: Computing capabilities can be automatically provisioned to users based on their requirements, allowing for quick and flexible resource allocation
• Usage-Based Billing: Services are billed based on actual usage, similar to data plans for mobile phones. Users pay only for the resources they consume, whether it's a limited plan or an unlimited plan
• Multi-Tenant Architecture: A shared set of cloud resources is made available to multiple users within an organization, each with their own set of permissions and access rights.
Virtualization and multicore
Virtualization
Virtualization is a method that separates a service from its physical implementation. It involves creating a virtual version of something, typically computer hardware, and originated in the mainframe computer era. This technique uses specialized software to simulate a computing resource instead of using the actual resource. Virtualization allows multiple operating systems and applications to run concurrently on a single machine, enhancing hardware utilization and flexibility (Bisht, 2023).
In simpler terms, virtualization is a key strategy for cost reduction, hardware savings, and energy efficiency in cloud computing. It enables the sharing of a single physical resource or application across various customers and organizations simultaneously. This is achieved by assigning a logical name to physical storage and linking it to the physical resource as needed. Virtualization, often equated with hardware virtualization, is crucial in providing efficient Infrastructure-as-a-Service (IaaS) for cloud computing. Additionally, virtualization technologies create a virtual environment that supports not just application execution but also storage, memory, and network functions.
• Enhanced flexibility and efficiency in resource allocation
• Offers remote access and quick scalability
• Provides high availability and facilitates disaster recovery
• Implements a pay-per-use model for IT infrastructure
• Supports running multiple operating systems concurrently
• Significant Initial Cost: Adopting cloud services requires a substantial initial investment, but it can eventually lower company costs
• Need for Specialized Skills: Transitioning from traditional servers to cloud-based systems demands a workforce with cloud-specific skills. This may require hiring new staff or training existing employees
• Data Security Risks: Storing data on third-party platforms increases the risk of cyber attacks, making it more vulnerable to hackers and other security threats.
Multicore
A multi-core CPU is a computer processor with two or more sections, each functioning as if it were a separate computer. Although these cores are integrated on a single chip, they operate independently, resembling each other in design. This design allows several relatively autonomous cores to work together efficiently. For instance, a dual-core processor has two independent cores, while a quad-core processor includes four. The name of the processor reflects the number of cores it contains (geeksforgeeks, 2023).
In a multi-core processor, each core handles different tasks. For example, if you're using WhatsApp on a mobile device, one core might manage the app while other cores perform different tasks, like downloading a document. This is akin to comparing a person with one hand to someone with two; the latter can perform more tasks simultaneously. Similarly, a multi-core processor can handle more tasks than a single-core processor.
The effectiveness of a multi-core processor also depends on the operating system. Some operating systems might not fully utilize the capabilities of multi-core processors, especially if they require more power. For instance, a high-speed processor will use more power, leading to increased battery consumption in a laptop. When running high-graphics games, the need for more processing power, and hence more power, can cause the laptop's battery to drain faster.
• Enhanced Work Completion: Multicore processors outperform single-core processors in completing tasks
• Superior for Multithreading: They excel in handling multithreading applications
• Efficient Low-Frequency Operations: Capable of performing simultaneous tasks at lower frequencies
• Greater Data Handling: They can process more data compared to single-core processors
• Energy Efficiency: Multicore processors consume less energy for the same workload than single-core processors
• Simultaneous Complex Tasks: They enable performing complex tasks, like virus scanning and watching a movie, at the same time
• Management Challenges: More complex to manage compared to single-core processors
• Higher Cost: Generally more expensive than single-core processors
• Limited Speed Increase: The speed increase is not directly proportional to the number of cores
• Performance Dependency: Their performance is highly dependent on user operations
• Higher Power Consumption: Tend to consume more power, especially under heavy workload.
Solution for ATN
Overview Scenario
ATN is a Vietnamese company selling toys to teenagers in many provinces across Vietnam. The company has a revenue of over 700,000 dollars per year. Currently, each shop has its own database that stores transactions for that shop only. Each shop has to send its sales data to the board of directors monthly, and the board needs a lot of time to summarize the data collected from all the shops. Besides, the board can't see stock information updated in real time.
Overview Solution
The given scenario clearly indicates that the business needs to distribute its products across multiple branches instead of relying on a single one. Additionally, they require a system to track monthly revenues for each store and maintain transaction records for all branches. It can be concluded that utilizing cloud data solutions will effectively address these challenges. The following reasons support the suitability of the proposed solution for businesses:
By adopting cloud data solutions, ATN businesses can eliminate the need to invest in and manage their own servers, as well as hire additional staff for server maintenance and operation. Instead, they can simply pay a modest monthly fee to rent server space and access the necessary computing power, resulting in significant cost savings. Servers are not only expensive to maintain but also require ongoing operational expenses.
ATN businesses can reduce expenses related to software, hardware, and installation time by leveraging cloud computing services. This approach streamlines the process of locating and converting data, saving considerable time and effort. Moreover, those businesses can choose from various service packages that align with their budgets, allowing them to adjust their package as needed without worrying about capacity or configuration issues when upgrading.
Storing data at multiple branches becomes more efficient and convenient when each branch can access real-time inventory information and easily search for products at other locations via an internet connection. This simplifies the addition of new products and expands product offerings. Additionally, ATN businesses can remotely monitor the performance and operations of each store branch and oversee their staff and retail operations from a distance. The cloud-based storage system ensures data backup, mitigating concerns about data loss due to natural disasters or the need for data recovery.
Security is a paramount concern for any business. Cloud computing service providers guarantee the security of ATN company's information, offering upgraded security measures beyond standard server setups.
In summary, adopting cloud data solutions offers multiple benefits for ATN company, including cost savings, streamlined operations, enhanced efficiency, remote management capabilities, and robust security measures.
Deployment Model
The Public Cloud deployment model is an ideal match for this business case Here’s why it's the best choice for ATN to deploy its cloud infrastructure:
Efficiency: By shifting each store's database to the public cloud, ATN gains a more efficient data storage solution. This allows any end user, including store managers and board directors, to access data instantly via the Internet. Managed by a third-party organization, the public cloud relieves ATN of the burden of maintaining and upgrading physical database infrastructure.
Scalability: The public cloud enables ATN to automatically and dynamically adjust resources in response to evolving business needs and market conditions. This flexibility allows for scaling applications, users, and workloads, fostering agility and competitive advantage.
Simplified Disaster Recovery: ATN can effortlessly back up data, applications, and resources across global public cloud regions with a few clicks. Unlike traditional on-premises disaster recovery, there's no need for maintaining equipment in multiple data centers, reducing staffing and infrastructure costs. Public cloud providers offer automated, scalable disaster recovery services, providing redundancy and geographic diversity for business continuity and compliance.
Cost Savings: With an annual income of around 700,000 USD, it's crucial to minimize costs. Moving to the public cloud can significantly reduce expenses related to hardware, storage, networking, software licenses, and more. Pay-as-you-go packages mean companies pay only for what they use.
Staying Up-to-Date: Currently, each store maintains its own database for transactions. This setup requires the board director to spend considerable time summarizing monthly data from all shops, and they cannot see stock updates in real time. Deploying applications on the public cloud, as opposed to on-premises, eases the burden of updates and maintenance. The public cloud offers the latest in operating systems, applications, and services, including advancements in security, cloud-native development, application monitoring, AI, big data analytics, and more.
Service Model
The three forms of cloud computing services – Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) – each offer distinct benefits and drawbacks. In this discussion, I suggest opting for PaaS, which is particularly beneficial for small businesses like ATN. PaaS stands out for its ability to offer a comprehensive development environment for ongoing and established cloud applications, unlike SaaS, which primarily hosts pre-developed applications. It allows ATN to build and deploy software without needing to estimate the required memory or CPU resources. Throughout all software development phases, from design to maintenance, ATN gains from PaaS's advantages.
One significant benefit of PaaS is its capability to centralize data, which would help ATN consolidate its databases across stores, eliminating the need for separate databases and manual data reporting. PaaS also enables real-time data access, aiding swift, data-driven decisions. Its automated reporting tools streamline the process of gathering and summarizing sales data, replacing tedious manual methods.
PaaS scales with business growth, accommodating more data and transactions without substantial new hardware investments. It offers customization, allowing ATN to adjust the platform according to evolving business requirements.
Financially, PaaS reduces costs related to server space, programming software, security, and ongoing maintenance. It automates traditional manual tasks, supporting the entire development cycle, and eliminates the need for hefty upfront investments. This subscription-based model lowers IT expenses like maintenance and upgrades, which are managed by the service provider, leading to significant savings. Therefore, PaaS is particularly suitable for application sharing among ATN's store branches, benefiting from pre-built infrastructure and regular maintenance.
Cloud services include automatic data backups, minimizing data loss risks due to various factors. They also offer quick data recovery options, ensuring business continuity. Providers invest in advanced security measures such as encryption and firewalls, complying with international standards for an additional security layer.
PaaS can seamlessly integrate with existing business applications, fostering a unified technology ecosystem. This integration ensures data consistency, enhances accuracy, and reduces manual entry. Its remote access capabilities facilitate global collaboration, making it ideal for companies like ATN with teams in various locations. This integration supports efficient global processes and real-time collaboration across ATN's network.
By delegating technical aspects like data management and infrastructure maintenance to the cloud provider, ATN can concentrate more on core activities like marketing and product development, enabling greater flexibility and innovation in market and customer response.
In summary, adopting a cloud PaaS service for ATN will not only address current inefficiencies in data management and reporting but will also position the company to grow more efficiently and scale in the future.
Technical Specs
5.1.1 Node.js
a. Definition
Node.js, an open-source platform, enables server-side execution of JavaScript code. It's particularly suited for applications requiring a continuous connection between the browser and server, making it ideal for real-time applications like chat, news feeds, and web push notifications (Cathy, 2023).
Node.js is designed to operate on a dedicated HTTP server, functioning on a single thread for one process at a time. Its applications are event-driven and operate asynchronously. Unlike the conventional model of processing requests in a sequence of receive, process, send, wait, and receive, Node.js handles incoming requests in a continuous event loop, dispatching small requests successively without waiting for responses.
This approach differs significantly from traditional models that handle larger, more complex processes in multiple concurrent threads, each waiting for its corresponding response before proceeding.
One of Node.js's key benefits, as highlighted by its creator Ryan Dahl, is its non-blocking nature for input/output (I/O) operations. However, some developers critique Node.js, noting that if a process consumes substantial CPU cycles, it could block and potentially crash the application. Proponents of Node.js argue that CPU processing time is less concerning due to the multitude of small processes that Node.js code typically involves.
b. The reason why we should choose Node.js
Node.js is my chosen programming language for several reasons, making it an ideal choice for our projects:
• Easy Scalability: Node.js is highly scalable, simplifying the expansion of applications both horizontally (by adding more nodes to a system) and vertically. This makes it suitable for ATN company's growth and the development of large, modern programs.
• Ease of Learning: JavaScript's popularity makes Node.js accessible, as many front-end developers are already familiar with it. This ease of use facilitates quicker team formation and collaboration in development.
• Unified Programming Language: Node.js allows for writing both client-side and server-side applications in JavaScript, eliminating the need for other server-side languages. This uniformity simplifies web application deployment and is compatible with various web browsers, aiding cost-effective development and deployment across multiple platforms and operating systems.
• High Performance: Leveraging Google's V8 JavaScript engine, Node.js efficiently translates JavaScript code to machine code, enhancing execution speed. Its support for non-blocking I/O operations also accelerates data processing, crucial for the high-speed update requirements of a sales store.
• Caching Advantage: Node.js's runtime environment facilitates efficient module caching. Once a module is requested, it's stored in the application's memory, speeding up web page loading and response times, an essential feature for managing operating costs in small and medium-sized store systems.
• Development Freedom: Unlike environments like Ruby on Rails, which impose certain guidelines, Node.js offers more freedom in application development. Developers can start from scratch, allowing for flexible and standard-compliant system construction tailored to ATN company's needs.
• Concurrent Request Handling: Node.js's non-blocking I/O system efficiently manages multiple concurrent requests, outperforming systems like Ruby or Python in handling simultaneous operations. This ensures systematic and rapid processing of incoming requests, essential for smooth and efficient system operation.
• High Extensibility: Node.js is highly customizable, allowing for extensive personalization to meet specific requirements. With built-in APIs for creating servers (HTTP, TCP, DNS, etc.) and support for JSON data exchange, it enables ATN company to easily maintain and upgrade its systems.
5.1.2 Laravel & PHP
a. Definition
Laravel is a free and open-source PHP framework that is both powerful and straightforward. It employs a model-view-controller (MVC) architectural pattern. By leveraging components from various existing frameworks, Laravel facilitates the creation of web applications that are both well-organized and effective (tutorialspoint).
This framework provides an extensive range of features that amalgamate the essential aspects of PHP frameworks like CodeIgniter and Yii, and elements from other programming languages such as Ruby on Rails. Laravel's comprehensive feature set greatly enhances the efficiency of web development.
For those already acquainted with Core PHP and Advanced PHP, Laravel simplifies the development process. It significantly reduces the time required to build a website from the ground up. Additionally, websites developed using Laravel are secure and offer protection against numerous web-based threats.
b. The reason why we should choose Laravel:
In the scenario you've described for ATN, a Vietnamese company selling toys with shops in various provinces and facing challenges in data consolidation and real-time stock visibility, using Laravel as the framework for developing a centralized web application could offer several benefits:
Cloud Architecture
The dynamic scalability architecture is a model that relies on predefined scaling criteria to facilitate the automatic distribution of IT resources from available pools. This dynamic allocation adjusts to fluctuations in consumption demand, ensuring IT resources are efficiently reallocated without the need for manual oversight and allowing for variable usage patterns (Ingale, 2022).
Based on the image above, we can describe an example of this cloud architecture:
• Cloud Service Consumers: These are the clients or end users that use the cloud services. They could be individuals, applications, or other services that make requests over the internet to access the cloud service.
• Automated Scaling Listener: This is a monitoring component that watches the traffic and the load on the cloud service, listening for the volume of requests coming from the cloud service consumers.
• Cloud Service Instances: These are the actual running instances of the cloud service that process requests from the consumers. They are scalable and can increase or decrease in number according to the load.
• Virtual Server Host: This represents the physical servers within the cloud provider's data center. These hosts run virtual machines that, in turn, run the cloud service instances. The flow goes as follows:
o The cloud service consumers send requests to the cloud service (indicated by the arrow labeled '1').
o The automated scaling listener monitors these requests and the load on the service instances (indicated by '2').
o If the listener detects that the load is too high and exceeds predefined capacity thresholds, it triggers the creation of more cloud service instances to handle the increased demand.
o These instances are provisioned on virtual servers that reside on the virtual server host.
This architecture allows the cloud service's capacity to scale up or down based on real-time demand, ensuring that the service remains responsive and available even as the number of requests varies. This is the key cloud computing property known as "elasticity," which provides flexibility and cost efficiency for both cloud service providers and consumers.
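The scaling flow above can be sketched as a simple decision function for the automated scaling listener. The thresholds and the minimum and maximum instance counts are illustrative assumptions rather than values from any real provider.

```javascript
// A minimal sketch of an automated scaling listener's decision logic.
// All thresholds and limits here are illustrative assumptions.
function autoScale(currentInstances, requestsPerInstance, opts = {}) {
  const {
    scaleUpAt = 100,  // requests per instance that trigger scale-out
    scaleDownAt = 20, // requests per instance that trigger scale-in
    min = 1,          // never release the last instance
    max = 10,         // cap on provisioned instances
  } = opts;

  if (requestsPerInstance > scaleUpAt) {
    // Load exceeds the predefined capacity threshold: provision one more
    // cloud service instance on the virtual server host.
    return Math.min(max, currentInstances + 1);
  }
  if (requestsPerInstance < scaleDownAt) {
    // Demand has dropped: release an idle instance to save cost.
    return Math.max(min, currentInstances - 1);
  }
  return currentInstances; // within thresholds: no change
}
```

A monitoring loop would call `autoScale` periodically with the measured request rate; the gap between `scaleUpAt` and `scaleDownAt` prevents the system from oscillating between scale-out and scale-in on small load fluctuations.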
In this assignment, the writer explored several cloud computing concepts, touching upon service models, deployment strategies, and the five essential characteristics of cloud computing. The examination covered both P2P and client-server network architectures, as well as associated technologies such as hypervisors and virtual machines. Subsequent parts provide a historical perspective on cloud computing, tracing its evolution to the present day. Towards the end of the document, a solution tailored for the ATN company is proposed, incorporating technologies that align with ATN's specific requirements, including a deployment strategy, service model, service provider, database, and programming language, among others.
References
amela, 2021. Why Mongo DB should be used in your application [Online]. Available at: https://amela.vn/cung-amela-tim-hieu-ve-mongo-db/
AnkitMahali, 2023. Characteristics of Cloud Computing [Online]. Available at: https://www.geeksforgeeks.org/characteristics-of-cloud-computing/
BANGER, E. R. S., 2023. What is Peer to Peer Network? Architecture, Types, & Examples!! [Online]. Available at: https://digitalthinkerhelp.com/what-is-peer-to-peer-p2p-network-with-architecture-types-examples/
Bisht, N., 2023. Virtualization in Cloud Computing and Types [Online]. Available at: https://www.geeksforgeeks.org/virtualization-cloud-computing-types/?ref=header_search [Accessed 08 12 2023]
Cathy, 2023. Why develop with Node.js? [Online]. Available at: https://www.bocasay.com/project-develop-nodejs/
Available at: https://www.techtarget.com/searchnetworking/definition/client-server
Foote, K. D., 2021. A Brief History of Cloud Computing [Online]. Available at: https://www.dataversity.net/brief-history-cloud-computing/ [Accessed 04 12 2023]
geeksforgeeks, 2023. Advantages and Disadvantages of Multicore Processors [Online]. Available at: https://www.geeksforgeeks.org/advantages-and-disadvantages-of-multicore-processors/?ref=header_search [Accessed 08 12 2023]
geeksforgeeks, 2023. What is Distributed Computing? [Online]. Available at: https://www.geeksforgeeks.org/what-is-distributed-computing/?ref=header_search [Accessed 06 12 2023]
Haritas, B., 2020. How SBI manages spike in digital traffic [Online]. Available at: https://cio.economictimes.indiatimes.com/news/strategy-and-management/how-sbi-manages-spike-in-digital-traffic/77211566 [Accessed 06 12 2023]
Hayes, A., 2021. Peer-to-Peer (P2P) Service: Definition, Facts, and Examples [Online]. Available at: https://www.investopedia.com/terms/p/peertopeer-p2p-service.asp [Accessed 06 12 2023]
healthdata, 2022. Health data compass [Online]. Available at: https://www.healthdatacompass.org/ [Accessed 06 12 2023]
heavy, 2022. Client-Server Definition [Online]. Available at: https://www.heavy.ai/technical-glossary/client-server [Accessed 06 12 2023]
heroku, 2023. What is Heroku? [Online]. Available at: https://www.heroku.com/about
Ingale, 2022. Cloud Architectures [Online]. Available at: https://technophileholmes.hashnode.dev/cloud-architectures
Joshjnunez, 2020. The Client-Server Relationship [Online]. Available at: https://medium.com/@joshjnunez09/the-client-server-relationship-9ac90fadb3d2 [Accessed 06 12 2023]
Matrix, O., 2019. Last Week Tonight Implements MatrixStore To Manage Growing Archive [Online]. Available at: https://www.linkedin.com/pulse/last-week-tonight-implements-matrixstore-manage-growing-object-matrix [Accessed 06 12 2023]
netapp, 2023. WHAT IS HIGH PERFORMANCE COMPUTING? [Online]. Available at: https://www.netapp.com/data-storage/high-performance-computing/what-is-hpc/ [Accessed 06 12 2023]
Peterson, R., 2023. Cloud Service Models [Online]. Available at: https://www.guru99.com/cloud-service-models.html
render, 2023. Render server [Online]. Available at: https://render.com/about
Shaptunova, Y., 2023. 4 Best Cloud Deployment Models Overview [Online]. Available at: https://www.sam-solutions.com/blog/four-best-cloud-deployment-models-you-need-to-know/ [Accessed 06 12 2023]
shubhikagarg, 2021. An Overview of Cluster Computing [Online]. Available at: https://www.geeksforgeeks.org/an-overview-of-cluster-computing/?ref=header_search [Accessed 06 12 2023]
simitech, 2023. What Is MySQL Database: Definition, History, And Features [Online]. Available at: https://simitech.in/what-is-mysql/
Srikanth, R., 2017. SBI: The rise of the digital bank [Online]. Available at: https://www.expresscomputer.in/news/sbi-the-rise-of-the-digital-bank/20196/ [Accessed 07 12 2023]
syedmodassirali, 2022. Client-Server Model [Online]. Available at: https://www.geeksforgeeks.org/client-server-model/ [Accessed 06 12 2023]
tutorialspoin, 2023. Laravel - Overview [Online]. Available at: https://www.tutorialspoint.com/laravel/laravel_overview.htm [Accessed 08 12 2023]
universedecoder, 2021. Introduction to Parallel Computing [Online]. Available at: https://www.geeksforgeeks.org/introduction-to-parallel-computing/ [Accessed 06 12 2023]
weka, 2022. HPC Applications & Real World Examples [Online]. Available at: https://www.weka.io/learn/hpc/hpc-applications/
Wesley Chai, Stephen J. Bigelow, 2022. cloud computing [Online]. Available at: https://www.techtarget.com/searchcloudcomputing/definition/cloud-computing