Data exchange and security in DR
DR may be carried out using several data exchange architectures that employ
different protocols and security mechanisms. Many approaches rely on
point-to-point designs based on HTTP [6-8,11,13,15,18,40] or other
protocols [12]. In this type of architecture, exchanging data requires
knowing in advance which endpoints to share data with, which normally
calls for discovery capabilities and for characterizing the systems so
that they can be discovered. In addition, several security mechanisms
coexist across point-to-point architectures, which hinders
interoperability: a system must know not only the endpoints, but also the
security mechanisms that those endpoints implement, and then support them.
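For illustration, the sketch below shows a minimal point-to-point exchange over HTTP using the Python requests library; the endpoint URL, bearer token and payload schema are hypothetical and must be agreed upon with each peer in advance, which is precisely the coupling discussed above.

```python
# Minimal point-to-point DR signal exchange over HTTP (illustrative sketch).
# The endpoint, token and payload fields are hypothetical: in a point-to-point
# design they must be known and agreed upon with each peer beforehand.
import requests

DR_ENDPOINT = "https://aggregator.example.org/dr/signals"  # known a priori
API_TOKEN = "token-agreed-with-this-peer"                  # peer-specific security mechanism

signal = {
    "event_id": "evt-42",
    "start": "2024-06-01T17:00:00Z",
    "duration_min": 60,
    "target_reduction_kw": 12.5,
}

response = requests.post(
    DR_ENDPOINT,
    json=signal,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()  # a different peer may expect another endpoint or auth scheme
```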
Other DR systems rely on publish/subscribe (Pub/Sub) architectures, which
are implemented with MQTT, to tackle the discovery problem. In these
systems data is exchanged through a broker under a topic to which one
client can publish data and others can subscribe. However, MQTT has known
security concerns that may limit its applicability in real-world
scenarios where security and privacy are critical.
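The following sketch illustrates the Pub/Sub pattern with paho-mqtt (assuming version 2.x); the broker address and topic name are hypothetical. Note that, as written here, the exchange runs over plain TCP unless TLS is explicitly configured, which relates to the security concerns mentioned above.

```python
# Publish/subscribe DR signal exchange through an MQTT broker (illustrative sketch).
# Clients only need to agree on the topic, not on each other's endpoints.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"   # hypothetical broker
TOPIC = "dr/region-1/signals"   # hypothetical topic

def on_connect(client, userdata, flags, reason_code, properties):
    # Subscribe and publish once the broker acknowledges the connection.
    client.subscribe(TOPIC)
    client.publish(TOPIC, json.dumps({"event_id": "evt-42", "target_reduction_kw": 12.5}))

def on_message(client, userdata, msg):
    # Every subscriber to the topic receives the DR signal published above.
    print("received:", json.loads(msg.payload))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)    # plain TCP; TLS must be configured explicitly
client.loop_forever()
```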
Finally, other DR systems rely on peer-to-peer (P2P) or edge-cloud
designs that use a range of protocols. Although several security
mechanisms are available for these architectures, none of the
aforementioned edge-cloud proposals specify which one is used [5,9,10].
Only SHAR-Q, which is based on peer-to-peer communication, defines the
use of SASL over an XMPP cloud. It is worth noting that these designs do
not suffer from the discovery concerns of the point-to-point architecture.
The CIM provides two levels of protection when transferring data.
Communication between the CIM and the local infrastructure relies on JWT
tokens, although any other authentication mechanism could be used. The
CIMs communicate with one another in a peer-to-peer architecture over an
XMPP cloud. To connect to the cloud, a CIM requires a set of credentials
in the form of a certificate (SASL); in addition, the CIMs use a separate
certificate to encrypt communications (TLS). Finally, the CIMs enforce a
whitelist-based access control list, so that nodes of the XMPP network
must be explicitly listed before data can be exchanged with them.
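A minimal sketch of these two protection levels is shown below, assuming PyJWT for the local-infrastructure check and a simple in-memory whitelist for the peer check; the identifiers, secret handling and whitelist source are hypothetical, and the actual CIM implementation may differ.

```python
# Illustrative sketch of the two protection levels described above.
# Level 1: the local infrastructure authenticates to the CIM with a JWT.
# Level 2: peer CIMs reach each other through the XMPP cloud (SASL + TLS are
# handled by the XMPP layer) and only whitelisted nodes may exchange data.
import jwt  # PyJWT

LOCAL_SECRET = "secret-shared-with-local-infrastructure"               # hypothetical
XMPP_WHITELIST = {"cim-a@xmpp.example.org", "cim-b@xmpp.example.org"}  # hypothetical ACL

def accept_local_message(token: str) -> dict:
    # Rejects the message if the JWT signature or claims are invalid.
    return jwt.decode(token, LOCAL_SECRET, algorithms=["HS256"])

def accept_peer_message(sender_jid: str, payload: bytes) -> bytes:
    # Rejects data coming from XMPP nodes that are not on the whitelist.
    if sender_jid not in XMPP_WHITELIST:
        raise PermissionError(f"{sender_jid} is not in the access control list")
    return payload
```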
Semantic interoperability and data validation in DR
Interoperability is defined as the capacity of two information systems to
share and consume data in a transparent manner [41]. When the shared data
is represented using Semantic Web technologies, this interoperability is
known as semantic interoperability. To that end, semantically
interoperable systems agree on the use of RDF data expressed according to
a specified ontology. One of the primary benefits of adopting ontologies
is that the RDF data may be consumed by systems that rely on different
ontologies, as long as these systems follow a set of equivalence criteria
between those ontologies [20]. Conversely, when a system is not
RDF-based, a data translation is required to translate from a
heterogeneous format and model into a semantically interoperable version
(uplift) and, vice versa, to let the system receive a semantically
interoperable payload and translate it into a format and model it
understands (downlift).
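The sketch below illustrates uplift and downlift with rdflib; the JSON payload, namespace and property names are hypothetical stand-ins for whatever ontology the interoperating systems agree on.

```python
# Minimal uplift/downlift sketch using rdflib. The namespace and terms below
# are illustrative placeholders, not an actual DR ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/dr#")

def uplift(reading: dict) -> Graph:
    # Heterogeneous (JSON) data -> RDF expressed with the agreed ontology.
    g = Graph()
    m = EX[f"measurement/{reading['id']}"]
    g.add((m, RDF.type, EX.Measurement))
    g.add((m, EX.hasValue, Literal(reading["kw"], datatype=XSD.double)))
    g.add((m, EX.observedAt, Literal(reading["timestamp"], datatype=XSD.dateTime)))
    return g

def downlift(g: Graph) -> list:
    # RDF -> the plain structure a non-RDF system understands.
    rows = g.query(
        """SELECT ?m ?v ?t WHERE {
               ?m a ex:Measurement ; ex:hasValue ?v ; ex:observedAt ?t .
           }""",
        initNs={"ex": EX},
    )
    return [{"id": str(m), "kw": float(v), "timestamp": str(t)} for m, v, t in rows]

graph = uplift({"id": "42", "kw": 3.2, "timestamp": "2024-06-01T17:00:00Z"})
print(downlift(graph))
```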
There are various DR approaches that do not rely on ontologies and
therefore do not provide semantic interoperability. Hossein et al. [5]
concentrate on the protocol plane and on how their solution outperforms
previous DR protocols in terms of data exchange performance; their
proposal is not tied to any specific data format or model.
The proposal by Wang et al. [6] focuses on executing DR on virtual
machines hosted on cloud services such as Amazon. Their approach examines
the demand response needs of the tenants' infrastructure and, as a
consequence, reduces the number of virtual machines needed. Although
their proposal covers tenant infrastructures that can be highly
heterogeneous, it is built on a bespoke model for data interchange, so
integrating new infrastructures requires a developer to make them
compatible with that model.
Deng et al. [7] offer a cloud-based method to maximize profitability in a
tailored DR system. Their proposal specifies a particular structure and
model for the algorithm that performs the computations on the cloud, and
the resulting orders are delivered to the clients' locations.
Chen et al. [8] propose a cloud DR system for electric vehicles that is
based on bespoke DR signals adhering to a specific model and format.
Kaur et al. [9] describe a similar DR method for electric vehicles; their
proposal also relies on a proprietary data model and data format for the
DR signals used to communicate with the vehicles and other stakeholders.
Zhang et al. [10] offer a method for training and applying a
reinforcement learning algorithm for DR. To that purpose, the authors opt
for an edge-cloud architecture, although they do not identify the
protocol employed. In this architecture the edge nodes supply data from
various sensors, and the algorithm is trained and applied at the cloud
level. The data is presented in a tabular format, with a data model
defined ad hoc for this work.
The DR system proposed by Frincu and Draghici [11] is based on cloud
services that gather data from specific smart home sensors. These sensors
send data to the cloud, where it is stored as a tuple, with each position
holding the measurement of a distinct sensor. The DR actions are then
computed at the cloud level, and commands are delivered back to the smart
home actuators.
Galkin et al. [12] offer an architecture for protocol-layer
interoperability. Their proposal focuses on adapting communications to
several protocols (IEC 61850 GOOSE, OpenADR, OCPP and UDP). The authors
discuss extending their proposal with an automated translation layer that
adapts heterogeneous data at the aggregator level, but they do not
provide such automatic translation tools.
Finally, Kim et al. [16] present a thorough examination of the advantages
of employing publish/subscribe and topic-based group designs in DR rather
than master-slave architectures. Their approach does not emphasize the
use of a single model or format, nor does it address interoperability.
It should be noted that the aforementioned approaches must deal with
heterogeneous systems for which no interoperability mechanism is
provided. Furthermore, the fact that most of these approaches define a
specific data model, or even employ a custom format, severely limits the
interoperability of these DR systems when incorporating new
infrastructures as data sources or when interfacing with other existing
DR systems. In contrast, several DR systems have embraced ontologies and
standards, which facilitates interoperability with other systems or
infrastructures that use the same ontologies or standards.
Zhou et al. [15] propose an ontology-based DR system for electric
vehicles. In this system, a set of existing systems provide data that is
compliant with the ontology; if new systems are included, they must
natively support data expressed in the custom ontology, i.e., the system
does not provide generic mechanisms for translating heterogeneous data
into semantically interoperable data (uplift).
Similarly, MAS2TERING [33] provides a DR system that is semantically
interoperable with other systems based on different ontologies. The
MAS2TERING system is built on the MAS2TERING ontology [37], which
incorporates several standard ontologies to enable interoperability.
However, MAS2TERING lacks tools for dealing with heterogeneous data
(uplift).
COSSMic [13] provides a DR system that merges smart home consumption data
with meteorological data, both in CSV format. The proposal defines an ad
hoc mechanism for translating these data files into RDF, which is then
published on the Web for consumption.
Similarly, RESPOND [14] uses an ad hoc approach to convert data from
several data sources into RDF, since these sources are known ahead of
time. The data is then stored in a third-party service, on top of which
tools and services are planned to provide measurement-driven
recommendations that help end users reduce energy demand and influence
their behavior. Furthermore, end users and stakeholders are continuously
informed via a mobile app [42].
Finally, two DR systems provide an uplift mechanism based on adapters
that translate heterogeneous data sources into semantically interoperable
data. Wicaksono et al. [18] combine a wide range of data sources,
creating a semantically interoperable layer on top of which ML algorithms
can be fed. SHAR-Q combines data from many data sources and offers a
semantic interoperability layer on top of which value-added services are
deployed; these services employ the data from the aforementioned sources
to create value in various forms (ML predictions, marketplaces, etc.).
It is worth noting that the majority of proposals in the literature do
not examine the translation of heterogeneous data from disparate sources
into semantically interoperable data (uplift), and none consider the
reverse operation (downlift). Furthermore, validating the transmitted
data is a critical concern: the data must be not only compliant with the
ontology, but also correct and valid (e.g., a DR signal must not increase
the load above certain dangerous thresholds).
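Such validation can be expressed declaratively; the sketch below uses pySHACL with a hypothetical shape that caps the load requested by a DR signal. The ontology terms and the threshold are illustrative only.

```python
# Constraint-based validation of a DR signal (illustrative sketch).
# The shape caps the requested load at a hypothetical safety threshold.
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(format="turtle", data="""
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/dr#> .

ex:SignalShape a sh:NodeShape ;
    sh:targetClass ex:DRSignal ;
    sh:property [
        sh:path ex:requestedLoadKw ;
        sh:datatype xsd:double ;
        sh:maxInclusive "50.0"^^xsd:double   # hypothetical safety threshold
    ] .
""")

data = Graph().parse(format="turtle", data="""
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/dr#> .

ex:signal42 a ex:DRSignal ;
    ex:requestedLoadKw "72.0"^^xsd:double .  # exceeds the threshold
""")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: the signal would push the load above the limit
print(report)
```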
The CIM includes mechanisms for both uplift and downlift, providing a
bidirectional translation mechanism. It should be noted that this is
critical for non-ontology-based systems to be able to consume the data
being transmitted. Furthermore, as previously stated, all communications
take place over a secure XMPP network. These are, to the best of the
authors' knowledge, unique and original features of the CIM.