EPJ Appl. Metamat., Volume 11, 2024
Article Number: 11
Number of page(s): 18
DOI: https://doi.org/10.1051/epjam/2023003
Published online: 03 May 2024
Research article
Electromagnetic cloak design with mono-objective and bi-objective optimizers: seeking the best trade-off between protection and invisibility
^{1} Aix Marseille Univ, CNRS, Centrale Med, Institut Fresnel, Marseille, France
^{2} UMI 2004 Abraham de Moivre-CNRS, Imperial College London, SW7 2AZ, London, UK
^{3} The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2AZ, UK
^{*} e-mail: ronald.aznavourian@fresnel.fr
Received: 20 July 2023
Accepted: 27 November 2023
Published online: 3 May 2024
We revisit the design of cloaks without resorting to any geometric transform. Cancellation techniques and anomalous resonances have previously been applied for this purpose. Instead of deductive reasoning, we propose a novel mono-objective optimization algorithm, namely a ternary grey wolf algorithm, and we adapt a bi-objective optimization algorithm. Firstly, the proposed chaotic ternary grey wolf algorithm searches three-valued spaces for all permittivity values in the cloak while minimizing the sum of a protection criterion and an invisibility criterion. Secondly, a bi-objective genetic algorithm is adapted to find pairs of optimal values of invisibility and protection.
Key words: Mono-objective optimization / bi-objective optimization / cloak design / protection / invisibility
© R. Aznavourian et al., Published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Since the publication of works by the teams of Leonhardt [1] and Pendry [2] in the same issue of the journal Science over 16–18 years ago, cloaking has become a mature research area in optics. It is by now well known that one can design invisibility cloaks via geometric transforms that lead either to anisotropic heterogeneous material parameters (e.g. rank-2 tensors of permittivity and permeability in optics), see [2], or to spatially varying, yet scalar-valued, parameters [1]. The latter is achieved through conformal maps, hence constrained to the 2D case, and besides the cloak is of infinite extent. There is yet a third route to cloaking, which relaxes the severe material constraints in [2] (notably some infinite anisotropy on the inner boundary of the cloak, rooted in the blow-up of a point onto a ball of finite extent known as the invisibility region, which can only be achieved over a narrow frequency bandwidth in practice [3]) and avoids the infinite extent of the cloak in [1]: so-called carpet cloaking is a combination of the previous two approaches, based on quasi-conformal grids [4]. This third route requires only moderate anisotropy, but it only achieves invisibility for an object placed on a mirror.
Some works on cloaking focus on the mathematical aspects that are connected to famous inverse problems, in particular electric impedance tomography [5,6], wherein one wishes to uniquely determine the conductivity within a bounded region by applying a known static voltage to the surface and recording the resulting current at the boundary (a Dirichlet-to-Neumann map). The Dirichlet-to-Neumann map determines the conductivity [7], but only if the conductivity is scalar-valued, positive and finite. However, if some of these conditions are not met (e.g. the conductivity is matrix-valued), electric impedance tomography fails [8]. This has been exploited to create non-unique conductivities sharing the same boundary measurements [9].
1.1 Main contributions
In the present work, we would like to revisit the design of cloaks without resorting to any geometric transform. Not surprisingly, there is prior work that explored this alternative route, notably through scattering cancellation techniques [10,11] and anomalous resonances [12,13]. However, our rationale for the design of cloaks is not based on deductive reasoning, but rather on an optimization algorithm, and more precisely on a nature-inspired optimizer known as the Grey Wolf Optimizer (GWO). Here again, one may point out former work on the design of invisibility cloaks [14,15] based on topology optimization [16]. In [17], a mono-objective genetic optimization algorithm has been applied to estimate the best values, in terms of bias with respect to free-space propagation conditions, of six parameters which define the desired cloak. However, we stress that the nature of the optimization algorithms we shall use here is radically different: we aim at estimating a much larger number of parameters than previous works such as [17], and we aim at minimizing two contradictory criteria instead of one.
1.2 Layout of the paper
In Section 2, we introduce the cloak design problem, pointing out the need for the minimization of two antagonistic criteria: an invisibility criterion and a protection criterion. In Section 3, we provide a state of the art of mono-objective and bi-objective optimization algorithms. We focus on the mono-objective grey wolf optimizer, and on the bi-objective non-dominated sorting genetic algorithm. In Section 4, we propose a novel ternary version of the grey wolf algorithm, which is dedicated to search spaces with three values. We name it chaotic ternary grey wolf optimizer (CTGWO). In Section 5, we present the results obtained on cloak design with the mono-objective approach, including CTGWO, and with the bi-objective approach involving the Non-dominated Sorting Genetic Algorithm II (NSGA-II). In Section 6 we discuss the results obtained. We point out the superiority of CTGWO over comparative algorithms in the mono-objective approach, and we emphasize the interest of the bi-objective approach for an end-user. Conclusions are drawn in Section 7.
1.3 Notations
The following notations are used throughout the paper: manifolds are denoted by blackboard bold, 𝔸, matrices by boldface uppercase roman, A. Vectors are denoted by boldface lowercase roman, a, and scalars by lowercase or uppercase roman, a, b or A. The P scalar components of a vector a are accessed via a^{1}, a^{2}, ..., a^{P}, such that $a={\left[{a}^{1},{a}^{2},\dots ,{a}^{P}\right]}^{T}$, with superscript T denoting the transpose. The interval of real values between scalars a and b is denoted by [a : b] with square brackets. A set of values is denoted by {a,...,b} with curly brackets.
The symbol ∘ denotes the Hadamard (also called componentwise) product of two vectors: a ∘ b is a vector whose K components are equal to a_{1}b_{1}, a_{2}b_{2}, … , a_{K}b_{K}. The symbol |a| means elementwise absolute value: |a| is a vector whose K components are equal to |a_{1}|, |a_{2}|, … , |a_{K}|.
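As a small illustration (a sketch using NumPy arrays standing in for the vectors a and b; not part of the paper's implementation), the two operations read:

```python
import numpy as np

# Componentwise (Hadamard) product and elementwise absolute value.
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 5.0, -6.0])

hadamard = a * b      # a ∘ b = [a_1 b_1, a_2 b_2, a_3 b_3]
abs_a = np.abs(a)     # |a| = [|a_1|, |a_2|, |a_3|]
```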
2 Cloak design problem: definition of experimental conditions, protection and invisibility criteria
We consider the 2D scattering problem sketched in Figure 1a. A point source is located at (x_{s},y_{s}) and radiates in free space (in blue color) in the vicinity of the yellow zone S_{1}, which is the area to be protected. This yellow region is cloaked by the green rectangular zone. For computational purposes, the blue free-space zone is surrounded by Perfectly Matched Layers (PMLs) that model an unbounded medium [18]. The scalar scattering problem amounts to finding the total scalar field u such that:
$$\text{div}\left[\sigma \hspace{0.17em}\text{grad}\hspace{0.17em}u\right]+{k}_{0}^{2}\chi u={\delta}_{S},$$(1)
where k_{0} is the free-space wavenumber (associated with the unbounded region of space outside of the object and its surrounding cloak, and corresponding to a free-space wavelength of λ_{0} = 2π/k_{0}), and σ and χ represent the scalar material properties. In the context of acoustic pressure waves in isotropic non-viscous fluids, σ is the inverse of density, χ the inverse of compressibility, and u the amplitude of the pressure wave. For anti-plane shear waves in isotropic solids, σ is the shear modulus and χ the density, whereas u stands for the component of the displacement field perpendicular to the (xy)-plane. Finally, for electromagnetic waves in transverse magnetic (TM) polarization, σ is the inverse of the relative permittivity and χ the relative permeability; u then represents the component of the magnetic field perpendicular to the (xy)-plane. The relative permittivity will be denoted by ε in the rest of the paper. The roles of permittivity and permeability are interchanged in transverse electric (TE) polarization, whereby the electric field is perpendicular to the (xy)-plane. All these physical setups are equivalent from a mathematical standpoint. In the rest of the paper, the considered wavelength is λ = 500 nm; the side of a triangle mesh has length λ/6 in free space and λ/12 in the cloak and the protected zone.
In what follows, we focus on the TM case. In this scalar Helmholtz equation (1), one can still consider a certain form of anisotropy [19], provided that σ can be written as a 2 × 2 matrix.
Indeed, even if only an isotropic cloak and protected area are considered here, the implementation of the PMLs relies on anisotropy (and absorption), see [18]. As in standard topology optimization, our design space is constructed on the mesh of the cloak shown in Figure 1. Each triangular element constitutes a voxel that can be filled with a particular material whose isotropic physical properties are represented by the scalar quantity σ. The practical Finite Element discretization and implementation of the problem follow quite closely those described in reference [20] and its associated ONELAB open source tutorials found in reference [21]. In short, a first dummy run allows one to initialize a table of discrete values of γ for a particular mesh (one entry of the table corresponds to one triangle of the mesh in the design region), and this table is used to control discrete γ values throughout the optimization procedure using the GetDP, GmshRead and ScalarField functions [22]. Then, equation (1) is solved using second-order Lagrange elements in a standard manner.
Finally, we are in a position to define the optimization criteria. The first one, C_{1}, is a protection criterion: the integral, over the region to be protected, of the square norm of the field:
$${C}_{1}=\frac{1}{\left|{S}_{1}\right|}{\int}_{{S}_{1}}{\left|u\right|}^{2}\hspace{0.17em}\text{d}S,$$(2)
where |S_{1}| is the area of S_{1} in Figure 1.
The second one, C_{2}, is an invisibility criterion: the integral, over the free-space region of interest, of the square norm of the difference between the field and a reference field denoted u_{0}:
$${C}_{2}=\frac{1}{\left|{S}_{2}\right|}{\int}_{{S}_{2}}{\left|u-{u}_{0}\right|}^{2}\hspace{0.17em}\text{d}S,$$(3)
where |S_{2}| is the area of S_{2} in Figure 1 and u_{0} is the free-space solution to the problem (i.e. the scalar Green function of the problem without any cloak or region to protect).
Our goal is to minimize C_{1} and C_{2} to achieve both protection and invisibility. Intuitively, we expect some trade-off between protection (i.e. a vanishing field u inside the protected region S_{1}) and invisibility (i.e. u as close as possible to u_{0} in the free-space region S_{2}). For instance, surrounding S_{1} by an infinitely conducting boundary would ensure a vanishing field u inside it, but u and u_{0} would differ markedly in the free-space region S_{2} due to strong scattering. On the other hand, if the cloak region is filled with free space, then S_{1}, the cloak and S_{2} are impedance matched at the interfaces between all three regions and thus u = u_{0} in S_{2} (the cloak and S_{1} being transparent); but in that case there is no protection at all in S_{1}. Depending upon their needs, cloak designers might wish to give more weight to criterion C_{1} or C_{2}. Both C_{1} and C_{2} depend on P parameters, where P is equal to the number of voxels (equivalently, triangles) in the cloak. These parameters, denoted by K^{1}, K^{2}, … , K^{i}, … , K^{P}, take their values in a so-called ‘search space', which can be either discrete or continuous. In this paper we firstly consider the realistic case where three possible permittivity values can be associated with each triangle. These values correspond to three different materials and yield the search space {7,10,12}. Equivalently, once these values are set, we may access them through the index values {0,1,2}. This constitutes our ‘ternary search space'. Secondly, we perform a study which is less realistic: in these simulations the permittivity of each voxel may be any real value in [7:12] (up to seven decimal digits).
In a nutshell, we face a bi-objective problem with either a ternary or a continuous search space. In the rest of the paper, the criteria C_{1} and C_{2} will also be denoted by f_{1}(x):ℝ^{P} ↦ ℝ_{+} and f_{2}(x):ℝ^{P} ↦ ℝ_{+}, where vector x contains the parameter values K^{1}, K^{2}, … , K^{i}, … , K^{P}.
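Once the fields have been sampled on the mesh, the two criteria reduce to area-weighted means. The sketch below assumes fields and element areas are available as NumPy arrays (the names `u_S1`, `areas_S1`, etc. are hypothetical and not part of the paper's ONELAB implementation):

```python
import numpy as np

def protection_criterion(u_S1, areas_S1):
    """C1 of Eq. (2): area-weighted mean of |u|^2 over the protected region S1."""
    return np.sum(np.abs(u_S1) ** 2 * areas_S1) / np.sum(areas_S1)

def invisibility_criterion(u_S2, u0_S2, areas_S2):
    """C2 of Eq. (3): area-weighted mean of |u - u0|^2 over the region S2."""
    return np.sum(np.abs(u_S2 - u0_S2) ** 2 * areas_S2) / np.sum(areas_S2)
```

A perfectly protected region (u = 0 on S_{1}) gives C_{1} = 0, and a perfectly invisible cloak (u = u_{0} on S_{2}) gives C_{2} = 0.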
Fig. 1 Sketch of the scattering problem: Perfectly Matched Layers (PMLs, light grey) surround the free-space region S_{2} (cyan) that contains the source, the cloak (blue) and the protected region S_{1} (orange).
3 Global single-objective and bi-objective optimization methods
Optimization algorithms are meant to retrieve the set of parameter values at which a function reaches its minimum. In the case of single-objective optimization, only one function is considered for optimization; in the case of bi-objective optimization, two functions should be minimized simultaneously.
Mono-objective problems are encountered in wave propagation phenomena, for instance for the structural optimization of a two-dimensional photonic crystal [23], or to perform the inversion of ellipsometric data [24], thanks to the good convergence properties of bio-inspired optimization methods. Multi-objective problems are faced when a trade-off should be found between two antagonistic objectives, in particular for material design as in [25], but also when considering electromagnetic wave propagation, for instance when designing optical diffracting devices [26].
We assume that P parameters should be estimated: K^{1}, K^{2}, … , K^{i}, … , K^{P}, where P ≥ 1. We recall that, as explained in Section 2, in the considered problem P is equal to the number of voxels in the cloak. The following notations will be used:
– P is the number of expected parameters, which are indexed with i.
– n denotes one iteration and n_{max} the maximum allowed number of iterations.
– f(.) is a function to be optimized, also called the criterion. It depends on the P parameters mentioned above. In this paper, unless specified, minimization problems are considered.
In the case of a single-objective optimization, there is one function f(.) to be minimized. In the case of bi-objective optimization, there are two functions f_{1}(.) and f_{2}(.) to be minimized.
Both GWO and NSGA-II are agent-based algorithms:
– x_{q}(n) is a vector corresponding to an agent q = 1,...,Q at iteration n. It takes the form of a P-tuple of tested values x_{q}(n)=[K^{1},K^{2},...,K^{P}]^{T}.
In Section 3.1, we give background on a single-objective optimization algorithm: the grey wolf optimizer (GWO). In Section 3.2, we give background on a bi-objective optimization algorithm, namely the non-dominated sorting genetic algorithm (NSGA) [27] and its fast version (NSGA-II) [28].
3.1 Singleobjective optimization: Grey Wolf Optimizer
The GWO is a nature-inspired optimizer based on the observation of the social life of grey wolves in nature [29]. In this method an agent is called a wolf. The algorithm simulates the common behaviour and hunting strategies of grey wolves in their environment. The seminal GWO searches a continuous space [29]. Among the search agents, there are three leaders α, β, and δ. All other agents are the ω wolves.
– x_{α}(n), x_{β}(n), and x_{δ}(n) denote the positions of the leaders α, β and δ, respectively, at iteration n.
The position of any wolf at iteration n+1 is calculated as:
$${x}_{q}\left(n+1\right)=\frac{1}{3}\left({y}_{\alpha}\left(n\right)+{y}_{\beta}\left(n\right)+{y}_{\delta}\left(n\right)\right).$$(4)
It results from the equal contribution of the α, β and δ wolves. These contributions are computed at each iteration n as follows, for any leader l (either α, β or δ):
$${y}_{l}\left(n\right)={x}_{l}\left(n\right)-b\circ {d}_{l}\left(n\right),$$(5)
with ${d}_{l}\left(n\right)=\left|c\circ {x}_{l}\left(n\right)-{x}_{q}\left(n\right)\right|$, |•| denoting elementwise absolute value.
The vectors b and c are calculated as b = 2a ∘ r_{1} − a and c = 2r_{2}. In these expressions, the vectors r_{1} and r_{2} have random components between 0 and 1.
For the i^{th} parameter (i =1,...,P):
The component b^{i} of b is provided by:
$${b}^{i}=2a{r}_{1}-a,$$(6)
with the same expression whatever i.
The component ${d}_{\mathrm{l}}^{i}\left(n\right)$ of d_{l}(n) is provided by:
$${d}_{l}^{i}\left(n\right)=\left|2{r}_{2}{x}_{l}^{i}-{x}_{q}^{i}\left(n\right)\right|,$$(7)
where r_{1} and r_{2} are two random values between 0 and 1; ${x}_{q}^{i}\left(n\right)$ is the i^{th} component of the q^{th} agent at iteration $n;{x}_{\mathrm{l}}^{i}$ is the i^{th} component of leader l.
The component ${y}_{\mathrm{l}}^{i}\left(\text{n}\right)$ of y_{l}(n) is provided by:
$${y}_{l}^{i}\left(n\right)={x}_{l}^{i}-{b}^{i}{d}_{l}^{i}\left(n\right).$$(8)
During the hunt, the wolves firstly diverge from each other to search for the prey, or equivalently to encircle it. Secondly, they converge to kill the prey. This is mathematically modeled through the deterministic vector a, whose components are all equal to a, a scalar value which is a key parameter of the algorithm. When a > 1, the search agents are obliged to diverge from the prey: this is the exploration phase. Conversely, when a ≤ 1, the search agents are obliged to attack towards the prey: this is the exploitation phase. In the vanilla version of GWO [29], the key parameter a decreases linearly from 2 to 0:
$$a=2\left(1-\frac{n}{{n}_{\mathrm{max}}}\right).$$(9)
In more recent works, various expressions have been proposed for a, such as a quadratic [30] or adaptive [31] function. Whatever the version [30,31], the exploration phase lasts until a = 1, then the exploitation phase lasts from a = 1 to a = 0. Storing all values of $f\left({x}_{\alpha}\left(n\right)\right)$ across the iterations from 1 to n_{max} yields a so-called convergence curve. The outcomes of a single-objective optimization method are mainly the solution x_{α}(n_{max}), but also the convergence curve.
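To make the update rules concrete, here is a minimal continuous GWO sketch following equations (4)–(9). Clipping the updated positions to the search bounds is our own practical addition, not specified above:

```python
import numpy as np

def gwo_minimize(f, dim, lo, hi, n_agents=20, n_max=200, seed=0):
    """Minimal continuous Grey Wolf Optimizer (Eqs. (4)-(9))."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lo, hi, size=(n_agents, dim))
    convergence = []
    for n in range(n_max):
        # the three best wolves are the leaders alpha, beta, delta
        fitness = np.array([f(w) for w in wolves])
        leaders = wolves[np.argsort(fitness)[:3]].copy()
        convergence.append(fitness.min())
        a = 2.0 * (1.0 - n / n_max)                     # Eq. (9)
        for q in range(n_agents):
            contribs = []
            for x_l in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                b = 2.0 * a * r1 - a                    # Eq. (6), componentwise
                d = np.abs(2.0 * r2 * x_l - wolves[q])  # Eq. (7)
                contribs.append(x_l - b * d)            # Eq. (8)
            # Eq. (4): equal contribution of the three leaders
            wolves[q] = np.clip(np.mean(contribs, axis=0), lo, hi)
    best = min(wolves, key=f)
    return best, convergence
```

On a convex test function such as the sphere f(x) = Σ(x^{i})², the pack converges towards the minimum as a decreases to 0.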
3.2 Bi-objective optimization: non-dominated sorting genetic algorithm
Non-dominated sorting genetic algorithms (NSGA and NSGA-II) are inspired by Darwin's rules of evolution. In this method an agent is called a chromosome. The interest of multi-objective optimization has already been emphasized in the modeling of physical phenomena, for instance in [32], where NSGA-II is used to model turbulence, or in the design of finite 3D periodic structures [33]. The main steps of the algorithm are as follows: generate a random population of chromosomes, calculate the function values f_{1} and f_{2} for each chromosome, sort the chromosomes in the population, choose parents of the next generation by a tournament algorithm, generate children by crossover and mutation, extract a new generation through ranking, and repeat the process from the parent choice.

The expected outcome of NSGA-II is different from the outcome of GWO. For a single-objective method, we represent the results as convergence curves, and the solution is a single set of values extracted from the search space. We try to minimize the cost function so that the objective tends as much as possible towards zero; it is up to the user to determine a threshold value for the cost function, from which one optimal solution is retrieved. For a bi-objective method, we no longer have convergence curves, but Pareto fronts. The solution is composed of several sets of values extracted from the search space and located on the Pareto front. The principle of the Pareto front is to represent the value of the first cost function on the horizontal axis and the value of the second cost function on the vertical axis. Thus, one sees very quickly whether a solution favors f_{1} or f_{2}. The main method to estimate the quality of a solution is the ‘domination' principle. For each solution, we calculate the distance between the origin, of coordinates (0,0), and the considered solution.
Of course, the solution which has the smallest distance will ‘dominate' the solution which has a larger distance. Considering two solutions x_{q1} and x_{q2}, we can say that x_{q1} ‘dominates' x_{q2} if the following conditions are respected:
$$\left({f}_{1}\left({x}_{q1}\right)\le {f}_{1}\left({x}_{q2}\right)\right)\hspace{0.17em}\text{and}\hspace{0.17em}\left({f}_{2}\left({x}_{q1}\right)\le {f}_{2}\left({x}_{q2}\right)\right),$$
and
$$\left({f}_{1}\left({x}_{q1}\right)<{f}_{1}\left({x}_{q2}\right)\right)\hspace{0.17em}\text{or}\hspace{0.17em}\left({f}_{2}\left({x}_{q1}\right)<{f}_{2}\left({x}_{q2}\right)\right).$$
It is up to the user to choose the best solution(s) to keep. Indeed, the user may very well want to favour one of the two objectives, or seek the best compromise between the two objectives.
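A sketch of this domination test, with the criterion values passed as plain floats:

```python
def dominates(f1_a, f2_a, f1_b, f2_b):
    """True if solution a Pareto-dominates solution b: a is no worse than b
    on both criteria, and strictly better on at least one of them."""
    no_worse = (f1_a <= f1_b) and (f2_a <= f2_b)
    strictly_better = (f1_a < f1_b) or (f2_a < f2_b)
    return no_worse and strictly_better
```

Note that two solutions may be mutually non-dominated (each better on one criterion), which is precisely what populates a Pareto front.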
Actually, in the last decades, different methods to determine the ‘domination' principle have been proposed, and then the ‘non-domination' principle emerged, notably thanks to Srinivas and Deb, who proposed the NSGA algorithm [27] and then an improved version called NSGA-II [28]. This fast sorting method by ‘non-domination' has been widely adopted by other algorithms as an efficient technique. The particularity of NSGA-II is to hierarchize the levels of ‘domination', with a first Pareto front containing only the non-dominated solutions, a second Pareto front with the solutions dominated by one or two solutions, and finally a third Pareto front with all the other solutions, those dominated by more than two solutions. For this last category, we compute ‘crowding distances' for its solutions, then sort the set of results thus obtained. The ‘crowding distance' is calculated criterion by criterion. For example, for the criterion represented on the horizontal axis, we first determine the extreme solutions, which we call ‘min' and ‘max', it being understood that ‘min' is the solution with the smallest value on the horizontal axis and ‘max' the solution with the largest value on that axis. Considering that we have Q′ < Q solutions on the Pareto front, we then assign an index to each solution, with index 1 for ‘min' and index Q′ for ‘max'. For each solution of index q, with 1 < q < Q′, we calculate the distance on the horizontal axis between the (q−1)^{th} and the (q+1)^{th} solutions, and we divide this distance by the distance on the horizontal axis between ‘max' and ‘min'. It is this result that is considered as the crowding distance of each solution.
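The per-criterion computation described above can be sketched as follows. Assigning an infinite distance to the two extreme solutions ‘min' and ‘max' is our assumption, since the text leaves the endpoints unspecified:

```python
import numpy as np

def crowding_distance_1d(values):
    """Crowding distance along one criterion axis: for each interior solution
    of index q, the gap between its neighbours (q-1) and (q+1) on this axis,
    normalized by the 'max' - 'min' span."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    v = values[order]
    span = v[-1] - v[0]
    dist = np.full(len(values), np.inf)   # endpoints: assumed infinite
    for k in range(1, len(values) - 1):
        dist[order[k]] = (v[k + 1] - v[k - 1]) / span
    return dist
```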
The next step is to create a ‘descendant'. To do this, we first organize a selection tournament, i.e. we randomly draw pairs of solutions in the set of solutions, and for each pair we determine which one can become a ‘parent' by comparing the hierarchical levels of the Pareto fronts. For example, if the first solution belongs to the second Pareto front and the second solution belongs to the third Pareto front, then the selection tournament is won by the first solution, because the second Pareto front contains solutions that are better than those of the third Pareto front. If two solutions belonging to the same Pareto front are drawn, then the solution with the smallest crowding distance is selected as the ‘parent'. Then, for each pair of ‘parents', we generate a child which takes the values of the first ‘parent' for some unknowns and the values of the other ‘parent' for the remaining unknowns. This step is called ‘crossover'. Finally, according to a percentage defined beforehand, the values of some unknowns of the child are slightly modified. This last step is called ‘mutation'. The whole process is repeated as many times as there are iterations, and in the end we obtain the set of ‘optimized' solutions.
4 Chaotic ternary grey wolf algorithm
We propose a ternary version of GWO, which specifically searches ternary spaces with enhanced exploration abilities.
Our first motivation is that, in the considered application, the search space is associated with three values of the permittivity ε. But this method could be applied to other situations and applications, involving for instance sensors with three possible states.
Our second motivation is to propose a method with enhanced exploration properties. Indeed, metaheuristics with enhanced exploration properties are of great interest to cope with applications where the objective function exhibits an elevated number of local minima. We achieve such enhanced exploration properties by proposing a ternary map which evolves across iterations, and by inserting chaotic sequences in the update rules of the agents.
Our third motivation is to improve the diversity of the agents' behavior. Indeed, GWO exhibits premature convergence due to the poor diversity of the population of wolves. So we propose to divide the wolf pack into two groups: the first with enhanced ‘exploration' abilities, and the second with ‘exploitation' abilities.
The proposed chaotic ternary GWO is denoted by CTGWO. In Section 4.1, we derive the update rules, which rely on specific transform functions depending on a parameter a. We wish to preserve the original philosophy of GWO: the number of leaders ruling the update of the agents is greater than one, and the parameter a makes it possible to distinguish between an exploration phase at the beginning of the algorithm and an exploitation phase at the end. In Section 4.1, we only assume that parameter a decreases from 2 to 0 across the iterations. Then, in Section 4.2, we investigate a chaotic expression for a.
4.1 Ternary update rules based on dedicated transform maps
We propose here innovative update rules, dedicated to a ternary search space and performed with the help of an ad hoc transform function. Firstly, we propose a novel manner to compute the contribution of a leader, and the mean contribution of several leaders. Secondly, we propose an update rule depending on this mean contribution.
4.1.1 Contribution of a leader
We recall that in the continuous case, the contribution ${y}_{l}^{i}\left(n\right)$ of a leader l is computed as in equation (8). In this case ${y}_{l}^{i}\left(n\right)={x}_{l}^{i}-{b}^{i}{d}_{l}^{i}\left(n\right)$ depends on the product ${b}^{i}{d}_{l}^{i}\left(n\right)$, which decreases to 0 together with a, reaching 0 when n = n_{max}. So ${y}_{l}^{i}\left(n\right)$ is a real value which gets closer to ${x}_{l}^{i}$ across the iterations. In the following we propose to compute a contribution ${y}_{l}^{i}\left(n\right)$ which is a real value between 0 and 2. We propose, as the expression of the contribution of leader l:
$${y}_{l}^{i}\left(n\right)=\left({x}_{l}^{i}-{b}^{i}{d}_{l}^{i}\left(n\right)\right)\hspace{0.17em}\mathrm{mod}\hspace{0.17em}2,$$(10)
where b^{i} is defined as in equation (6) and ${d}_{l}^{i}(n)$ is defined as in equation (7). Based on the hypothesis that the maximum value of a is 2, we deduce from equations (6) and (7) that the values of ${b}^{i}{d}_{l}^{i}(n)$ are between −8 and 8. So the values of ${x}_{l}^{i}-{b}^{i}{d}_{l}^{i}\left(n\right)$ can be out of the interval [0:2].
The ‘modulo' operator, denoted by mod, is meant to force the contributions ${y}_{l}^{i}\left(n\right)$ to remain in the interval [0:2]. It is defined as follows, whatever the values ${z}_{1}\in \mathbb{R}$ and ${z}_{2}\in {\mathbb{R}}^{*}$:
$${z}_{1}\hspace{0.17em}\mathrm{mod}\hspace{0.17em}{z}_{2}=\left\{\begin{array}{ll}{z}_{1}-{z}_{2}\lfloor {z}_{1}/{z}_{2}\rfloor & \text{if}\hspace{0.17em}{z}_{1}\ne {z}_{2}\\ {z}_{2} & \text{if}\hspace{0.17em}{z}_{1}={z}_{2}\hspace{0.17em}\text{or}\hspace{0.17em}{z}_{1}=0,\end{array}\right.$$(11)
where ⌊z_{1}⌋ denotes the largest value in ℤ which is smaller than or equal to the scalar ${z}_{1}\in \mathbb{R}$.
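A direct transcription of this operator, with z_{2} = 2 as used in equation (10):

```python
import math

def mod_op(z1, z2=2.0):
    """'Modulo' operator of Eq. (11). Unlike the usual modulo, the cases
    z1 == z2 and z1 == 0 are mapped to z2 itself, so that leader
    contributions remain in the interval [0:2]."""
    if z1 == z2 or z1 == 0:
        return z2
    return z1 - z2 * math.floor(z1 / z2)
```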
4.1.2 Weighted contribution of the leaders
We denote by ${\hat{y}}^{i}(n)$ the weighted contribution of four leaders α, β, δ, and ρ:
$${\hat{y}}^{i}\left(n\right)=\left\{\begin{array}{ll}\frac{1}{3}\left({y}_{\alpha}^{i}(n)+{y}_{\beta}^{i}(n)+\left(1-a/2\right){y}_{\delta}^{i}(n)+\left(a/2\right){y}_{\rho}^{i}(n)\right) & \text{if}\hspace{0.17em}a>1\\ \frac{1}{3}\left({y}_{\alpha}^{i}(n)+{y}_{\beta}^{i}(n)+{y}_{\delta}^{i}(n)\right) & \text{if}\hspace{0.17em}a\le 1.\end{array}\right.$$(12)
In equation (12), leader ρ is a wolf which is selected at random among the wolf pack.
We notice that ${\hat{y}}^{i}(n)$ gets closer to $\frac{1}{3}\left({y}_{\alpha}^{i}\left(n\right)+{y}_{\beta}^{i}\left(n\right)+{y}_{\delta}^{i}\left(n\right)\right)$ when the iteration index increases.
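In code, the weighted contribution of equation (12) reads as follows (scalar inputs for one dimension i):

```python
def weighted_contribution(y_alpha, y_beta, y_delta, y_rho, a):
    """Eq. (12): while a > 1 (exploration), a randomly chosen wolf rho is
    blended into the third leader's share; once a <= 1 (exploitation),
    the plain three-leader average of Eq. (4) is recovered."""
    if a > 1:
        third = (1 - a / 2) * y_delta + (a / 2) * y_rho
    else:
        third = y_delta
    return (y_alpha + y_beta + third) / 3.0
```

At a = 2 (start of the run) the δ share is entirely replaced by the random wolf ρ, which is what injects diversity into the pack.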
We can now derive a ternary rule which updates the position of any wolf in a ternary search space.
4.1.3 Ternary update rule
We propose an evolving map which enhances exploration at the beginning of the algorithm and exploitation at the end. Compared to recent works on the binary GWO [34,35], the new feature of the proposed transform map is of course the division of the map into three regions instead of two, but also the fact that the map evolves across iterations: we introduce, in the transform functions, a term proportional to a which emphasizes exploration at the beginning of the algorithm and exploitation at the end.
The proposed novel process, dedicated to ternary search spaces, permits the selection of the value 0, 1 or 2. In dimension i (i = 1,...,P), wolf q is updated from iteration n to iteration n + 1 as follows:
$${x}_{q}^{i}\left(n+1\right)=\left\{\begin{array}{lll}0 & \text{if} & r\ge {\phi}^{u}\left({\hat{y}}^{i}\left(n\right),a\right)\\ 1 & \text{if} & r<{\phi}^{u}\left({\hat{y}}^{i}\left(n\right),a\right)\hspace{0.17em}\text{and}\hspace{0.17em}r\ge {\phi}^{d}\left({\hat{y}}^{i}\left(n\right),a\right)\\ 2 & \text{if} & r<{\phi}^{d}\left({\hat{y}}^{i}\left(n\right),a\right)\end{array}\right.$$(13)
where the scalar r is a random value between 0 and 1, taken from a normal distribution. In equation (13) we introduce two functions which are necessary to define the ternary map. These functions are denoted by φ^{u} and φ^{d}:
$${\phi}^{u}:\left[0:2\right]\times {\mathbb{R}}_{+}\to \left[0:1\right];y\mapsto {\phi}^{u}\left(y,a\right)$$
$${\phi}^{d}:\left[0:2\right]\times {\mathbb{R}}_{+}\to \left[0:1\right];y\mapsto {\phi}^{d}\left(y,a\right).$$
Function φ^{u} separates the uppermost part of the map from the rest of the map, and function φ^{d} separates the lowermost part of the map from the rest of the map. The basic idea is that if the random number r leads to the region in-between, the value 1 will be chosen as the updated value. If r leads to the region which is above φ^{u} (resp. below φ^{d}), the value 0 (resp. 2) will be selected. We now detail the shape of the frontiers between regions 0, 1, and 2. We set a ternary map based on a ‘power function'. The basic function we use is a power function applied to any scalar y and depending on five parameters c_{1}, c_{2}, c_{3}, c_{4}, and a:
$${\phi}^{p}(y,{c}_{1},{c}_{2},{c}_{3},{c}_{4},a)=\left(\frac{y-{c}_{3}}{{c}_{1}}\right)^{{c}_{2}}+{c}_{4}+\frac{a}{5}\left(\exp\left(-\frac{{y}^{2}}{2}\right)-\exp\left(-\frac{{(y-2)}^{2}}{2}\right)\right).$$(14)
In equation (14), the first term, depending on c_{1}, c_{2}, c_{3}, and c_{4}, gives the overall shape of the function. The second term, depending on a, ensures that ${\phi}^{p}\left(0,{c}_{1},{c}_{2},{c}_{3},{c}_{4},a\right)\ge 0$ and ${\phi}^{p}\left(2,{c}_{1},{c}_{2},{c}_{3},{c}_{4},a\right)\le 1$. We use two versions of this function to define the ternary map: the first one with c_{3} = 2 and c_{4} = 1, the second one with c_{3} = 0 and c_{4} = 0:
$${\phi}^{u}\left(y,a\right)={\phi}^{p}\left(y,2,3,2,1,a\right)$$(15)
$${\phi}^{d}\left(y,a\right)={\phi}^{p}\left(y,2,3,0,0,a\right).$$(16)
The second term, depending on a (see Eq. (14)), yields values at y = 0 which are slightly larger than 0, and values at y = 2 which are slightly smaller than 1: $${\phi}^{p}\left(0,{c}_{1},{c}_{2},{c}_{3},{c}_{4},a\right)=\frac{a}{5}\left(1-\exp(-2)\right),\quad\text{and}\quad{\phi}^{p}\left(2,{c}_{1},{c}_{2},{c}_{3},{c}_{4},a\right)=1-\frac{a}{5}\left(1-\exp(-2)\right)$$ in both equations (15) and (16).
So:
$${\phi}^{u}\left(0,a\right)={\phi}^{d}\left(0,a\right)=\frac{a}{5}\left(1-\exp(-2)\right)$$(17)
and
$${\phi}^{u}\left(2,a\right)={\phi}^{d}\left(2,a\right)=1-\frac{a}{5}\left(1-\exp(-2)\right).$$(18)
The functions φ^{u}(y, a) and φ^{d}(y, a) defined in equations (15) and (16) can then be used in equation (13) to get the ‘Power' transform map.
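As a sanity check on equations (13)–(18), the frontier functions and the ternary update rule can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the names phi_p, phi_u, phi_d, and ternary_update are ours, and r is drawn uniformly here for simplicity.

```python
import math
import random

def phi_p(y, c1, c2, c3, c4, a):
    """Power-function frontier of Eq. (14)."""
    return ((y - c3) / c1) ** c2 + c4 + (a / 5.0) * (
        math.exp(-y**2 / 2.0) - math.exp(-(y - 2.0)**2 / 2.0)
    )

def phi_u(y, a):
    """Eq. (15): upper frontier, with c1 = 2, c2 = 3, c3 = 2, c4 = 1."""
    return phi_p(y, 2, 3, 2, 1, a)

def phi_d(y, a):
    """Eq. (16): lower frontier, with c1 = 2, c2 = 3, c3 = 0, c4 = 0."""
    return phi_p(y, 2, 3, 0, 0, a)

def ternary_update(y_bar, a, r=None):
    """Eq. (13): map the leaders' mean position y_bar in [0, 2] to 0, 1, or 2."""
    if r is None:
        r = random.random()
    if r >= phi_u(y_bar, a):   # region above the upper frontier
        return 0
    if r >= phi_d(y_bar, a):   # region between the two frontiers
        return 1
    return 2                   # region below the lower frontier
```

For a = 0 one can check numerically that y_bar = 0 (resp. y_bar = 2) always yields 0 (resp. 2), in agreement with the remarks below equation (18).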
4.1.4 Representation of the ternary transform map and interpretation
Figure 2 shows the ‘Power’ ternary map. In each case, the uppermost region maps to 0, the central region to 1, and the lowermost region to 2. The shape of the functions φ^{u} and φ^{d} is consistent with the derivations in equations (17) and (18).
We can check that:
The term which is proportional to a in equation (14) yields a map which evolves across the iterations; this is an important difference compared to the binary map presented in [34];
When a = 2, either the value 0 or the value 2 may be selected with an elevated probability when all leader contributions are equal to 0, or to 2;
When a = 0, the value 0 (resp. 2) is selected with probability 1 when $\overline{y^{i}\left(n\right)}=0$ (resp. $\overline{y^{i}\left(n\right)}=2$);
Whatever the value of a, any of the values 0, 1, or 2 may be selected when $\overline{y^{i}\left(n\right)}=1$.
Indeed, as defined in equation (10), the values of ${y}_{l}^{i}\left(n\right)$ are real and between 0 and 2, whatever i and l. In Section 4.2 we embed chaotic sequences in the expression of a to further improve the exploration abilities of our algorithm.
Fig. 2 Ternary ‘Power’ selection map with different values of parameter a. (a) a = 2; (b) a = 1; (c) a = 0. 
4.2 Embedding chaotic sequences in the ternary grey wolf optimizer
4.2.1 Chaotic expression of parameter ‘a’ for improved exploration abilities
We modify the expression of parameter a with respect to other versions of GWO, and propose:
$$a=2\left(1-{\left(\frac{n}{{n}_{\mathrm{max}}}\right)}^{{\eta}_{q}\left(1+\Gamma \left({c}_{q}^{1},n\right)\right)}\right).$$(19)
In equation (19), for the first time in this paper, the expression of a depends on the agent index q. Firstly, we reposition the worst agents closer to the three leaders, with a value of η_{q} which depends on the score of the agent: for the worst half of the agents (associated with the largest scores), ${\eta}_{q}=\frac{2}{3}$; for the best half of the agents (associated with the smallest scores), ${\eta}_{q}=\frac{3}{2}$. Secondly, inserting a chaotic sequence $\Gamma \left({c}_{q}^{1},n\right)$ enhances the variability of the behavior of the agents; ${c}_{q}^{1}$ is the initial value of the chaotic sequence, which is different for every agent q. The principle of the proposed method is that the parameter a of the GWO, which rules the displacement of the agents, is perturbed by the value of a chaotic sequence.
Choosing a different value of ${c}_{q}^{1}$ for each agent emphasizes diversity in the displacement of the agents. Meanwhile, we choose a sequence with a single attractor to ensure that $\Gamma ({c}_{q}^{1},{n}_{\mathrm{max}})$ is the same whatever q.
4.2.2 Construction of the chaotic sequences
To favor exploration, the behaviors of the agents should differ from one another. To favor exploitation at the end of the algorithm, the agents' behaviors should get closer to each other as the iteration index increases. We therefore set the following constraints on the chaotic sequences:
– The values of $\Gamma \left({c}_{q}^{1},\text{n}\right)$ are positive, in the interval [0:1];
– The last value should be $\Gamma \left({c}_{q}^{1},{\text{n}}_{\mathrm{max}}\right)=0$ whatever the initial value $\Gamma \left({c}_{\text{q}}^{1},1\right)$. In this way, at the last iteration n = n_{max}, the behavior of all agents is the same.
To fulfill these constraints easily, we choose a sequence with one known attractor. We base our sequence Γ on a logistic sequence, denoted by c(n). Given an initial term c(1), the subsequent terms are defined recursively, for n = 1,...,n_{max} − 1, as:
$$c\left(n+1\right)=\kappa c\left(n\right)\left(1-c\left(n\right)\right),$$(20)
where $\kappa \in {\mathbb{R}}_{+}^{*}.$ The number of attractors of this sequence depends on the value of κ. Figure 3 shows chaotic sequences with one (Fig. 3a), two (Fig. 3b), or an infinite number of attractors (Fig. 3c), obtained with κ = 2.8, κ = 3.2, and κ = 3.99, respectively. In Figures 3a–3c, each color corresponds to a different value of c(1); there are ten chaotic sequences in each case. We choose a sequence as in Figure 3a, where c(n_{max}) takes the same value, approximately 0.65, whatever the sequence; for this we set κ = 2.8.
For any agent q and for n = 1,...,n_{max}, we define the ad hoc chaotic sequence $\Gamma \left({c}_{q}^{1},n\right)$ as follows, based on the logistic sequence of equation (20):
$$\Gamma \left({c}_{q}^{1},n\right)=0.1\left(c(n)-c({n}_{\mathrm{max}})\right).$$(21)
Hence, for any agent q, $\Gamma \left({c}_{q}^{1},{n}_{\mathrm{max}}\right)=0$. The initial term ${c}_{q}^{1}$ is defined as follows:
$${c}_{q}^{1}=0.1\left(c\left(1\right)-c\left({n}_{\mathrm{max}}\right)\right).$$(22)
We set the initial term c(1) of the logistic sequence as a random value between 0 and 1, taken from a normal distribution. As c(1) is a random value, ${c}_{\text{q}}^{1}$ is also random and a different sequence is generated for each agent.
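The construction of equations (19)–(22) can be sketched in pure Python. This is an illustrative sketch under our own naming (logistic_sequence, gamma_sequence, a_schedule); with κ = 2.8 the logistic sequence converges to its single attractor 1 − 1/κ ≈ 0.643, so Γ vanishes at the last iteration for every agent.

```python
import random

def logistic_sequence(c1, n_max, kappa=2.8):
    """Logistic sequence of Eq. (20); kappa = 2.8 gives a single attractor."""
    c = [c1]
    for _ in range(n_max - 1):
        c.append(kappa * c[-1] * (1.0 - c[-1]))
    return c

def gamma_sequence(n_max, kappa=2.8, c1=None):
    """Eq. (21): Gamma(n) = 0.1 (c(n) - c(n_max)), hence Gamma(n_max) = 0."""
    if c1 is None:
        c1 = random.random()  # a different random seed value for each agent
    c = logistic_sequence(c1, n_max, kappa)
    return [0.1 * (cn - c[-1]) for cn in c]

def a_schedule(n, n_max, eta_q, gamma_n):
    """Eq. (19): chaotic, agent-dependent decay of parameter a from 2 to 0."""
    return 2.0 * (1.0 - (n / n_max) ** (eta_q * (1.0 + gamma_n)))
```

One can verify that a starts close to 2 at n = 1 and reaches exactly 0 at n = n_max regardless of η_q, matching the behavior displayed in Figure 4.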
Figure 4 shows the sequences of values for a for all agents. We clearly distinguish two families of agents: the first family with ${\eta}_{\text{q}}=\frac{3}{\text{2}}$ and rather elevated values of a, and the second family with ${\eta}_{\text{q}}=\frac{2}{\text{3}}$ and smaller values of a, which tend more rapidly towards 0.
Fig. 3 Chaotic sequences with various values of chaos parameter κ. (a) κ = 2.80, one attractor; (b) κ = 3.20, two attractors; (c) κ = 3.99, an infinite number of attractors. 
Fig. 4 Chaotic values of parameter a for all agents, and all iterations n =1,..,50. 
5 Results: monoobjective and biobjective approaches
5.1 Experimental conditions and metrics
In this section, the test environment is a Linux server equipped with four Intel(R) Xeon(R) X7560 CPUs @ 2.27 GHz (64 processors, Hyper-Threading activated) and 1000 GB of RAM. The software is written in Python.
We consider cloaks with P = 329 voxels (of triangular shape) and the expected outcomes of the algorithms are vectors x containing permittivity values K^{1}, K^{2}, … , K^{i}, … , K^{P}.
In Section 5.2 we display the results obtained with a monoobjective approach. The minimized criterion is $f:{\mathbb{R}}^{P}\to {\mathbb{R}}_{+}$, $f(x)=\frac{1}{2}({f}_{1}(x)+{f}_{2}(x))$. We recall (see Sect. 2) that f_{1} stands for protection, and f_{2} for invisibility.
We recall that topological optimization limits the number of possible materials to two. Instead, and for the first time in this paper, we adapt our new variant of the GWO, CTGWO, which searches for solutions among three materials. We compare the proposed CTGWO with the adaptive mixed GWO in discrete mode [31] (denoted by amixedGWO) and the vanilla continuous GWO [29]. The computational time for one trial of either f_{1} or f_{2} is 0.99 s. For GWO and amixedGWO, we use the expression of a presented in equation (9); for CTGWO, the expression presented in equation (19). The search space for the permittivity values ε is {7,10,12}. CTGWO and amixedGWO access these values via indices retrieved from the discrete search space {0,1,2}; GWO accesses them via rounded indices retrieved from the continuous search space [0:2]. We run the three algorithms with Q = 100 agents and n_{max} = 250 iterations, that is, 25 × 10^{3} trials of the objective function. The agents are initialized with random integers between 0 and 2.
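To make the encoding concrete, the mapping from ternary indices to permittivity values and the aggregated monoobjective criterion can be sketched as follows. The names decode, score, solve_f1, and solve_f2 are ours; the latter two are placeholders for the finite-element evaluations of f_1 and f_2, which are not reproduced here.

```python
# Ternary search space for the permittivity of each of the P voxels.
PERMITTIVITIES = (7.0, 10.0, 12.0)

def decode(indices):
    """Map ternary indices {0, 1, 2} to permittivity values {7, 10, 12}."""
    return [PERMITTIVITIES[i] for i in indices]

def score(indices, solve_f1, solve_f2):
    """Mono-objective criterion f(x) = (f1(x) + f2(x)) / 2 of Section 5.2.

    solve_f1 and solve_f2 are hypothetical callables standing in for the
    field computations of the protection and invisibility criteria.
    """
    eps = decode(indices)
    return 0.5 * (solve_f1(eps) + solve_f2(eps))
```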
In Section 5.3, we display the results obtained with a biobjective approach. The two criteria which are jointly minimized are ${f}_{1}:{\mathbb{R}}^{P}\to {\mathbb{R}}_{+}$ and ${f}_{2}:{\mathbb{R}}^{P}\to {\mathbb{R}}_{+}$. We display the results of GWO, amixedGWO, and the proposed CTGWO, as well as the results obtained by NSGAII in four situations. These five experimental conditions are summarized in Table 1. The ternary search space mentioned in Table 1 is {7,10,12}: NSGAII accesses these values via rounded indices retrieved from the continuous search space [0:2]. The continuous search space mentioned in Table 1 is [7:12]. This last situation is prospective, in the sense that we assume we can afford any material with any permittivity value between 7 and 12.
Cloak design experimental conditions: monoobjective approach (GWO, amixedGWO, CTGWO); and biobjective approach (NSGAII).
5.2 Monoobjective approach
In this section, we display the results obtained by CTGWO, amixedGWO, and GWO: the convergence curves in Figure 5; the scores reached by each method, and corresponding protection and invisibility performances in Table 2; the cloak design and wave propagation field in Figure 6.
In Table 2, we display, for GWO, amixedGWO, and CTGWO, the score of wolf α at the last iteration, $f\left({x}_{\alpha}\left({n}_{\mathrm{max}}\right)\right)$. For instance, the solution provided by CTGWO is x_{α}(n_{max}) = [7,12,12,...,10]^{T}. We notice that, in CTGWO, the power map leaves a small probability that the selected region does not correspond to the value used as a position on the horizontal axis. The interest of such a mismatch is to prevent the algorithm from being locked in a local minimum. The chaotic sequences also contribute to the good behavior of CTGWO shown by the convergence curve in Figure 5. Figure 6f indicates that the protection abilities of the cloak designed by CTGWO are clearly better than those of GWO and amixedGWO. The values in Table 2 confirm this impression: the protection criterion is two times smaller, and the mean criterion is also significantly smaller.
Fig. 5 Convergence curve of GWO, amixedGWO, and CTGWO algorithms. 
Comparison of GWO, amixedGWO and CTGWO monoobjective methods in protection (f_{1}) and invisibility (f_{2}) for a rectangular and halfrectangular cloak. n_{max}= 250, Q =100.
5.3 Biobjective approach
In a second step, we use a biobjective genetic algorithm as the optimization method. We therefore no longer work with convergence curves but with Pareto fronts, to balance the two minimized criteria.
The parameter values for NSGAII are as follows: mutation and crossover probabilities are p_{m} = 1/P ≃ 0.003 and p_{c} = 0.9; mutation and crossover distribution indices are η_{m} = 20 and η_{c} = 15.
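As an illustration of what the mutation distribution index η_{m} controls, here is a minimal pure-Python sketch of Deb's polynomial mutation, the real-coded mutation operator commonly used in NSGAII. This is our own sketch, not the authors' implementation; the bounds correspond to the continuous search space [7:12], and the default p_m = 1/P matches the per-variable mutation probability quoted above.

```python
import random

def polynomial_mutation(x, x_low, x_high, eta_m=20.0, p_m=1.0 / 329):
    """Deb's polynomial mutation on a real-coded vector x.

    Larger eta_m concentrates the perturbation near the parent value;
    each variable is mutated independently with probability p_m.
    """
    out = list(x)
    for i, xi in enumerate(out):
        if random.random() >= p_m:
            continue  # this variable is left unchanged
        u = random.random()
        if u < 0.5:
            delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
        # Perturb proportionally to the variable range, then clip to bounds.
        out[i] = min(max(xi + delta * (x_high - x_low), x_low), x_high)
    return out
```

With η_m = 20, most mutated permittivities stay close to their parent value, which explains the fine local refinement visible on the Pareto fronts.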
5.3.1 Biobjective ternary optimization with 250 iterations, 100 research agents
In this section, we display the results obtained by NSGAII: the Pareto front in Figure 7; the protection and invisibility performances in Table 3; the cloak design and wave propagation field in Figure 8.
Fig. 6 Column 1: in (a) GWO optimized cloak, in (b) discrete GWO optimized cloak, and in (c), CTGWO optimized cloak; column 2: in (d) GWO result, in (e) Discrete GWO result, and in (f), CTGWO result. 
Fig. 7 Left: Pareto front for NSGAII Table with 250 iterations, 100 research agents and four selected solutions marked in red. Right: Magnified view of the four selected solutions, three of which appear in Table 3. 
5.3.2 Biobjective ternary optimization with 1000 iterations, 200 research agents
In this subsubsection, we display the results obtained by NSGAII: the Pareto front in Figure 9; the protection and invisibility performances in Table 4; the cloak design and wave propagation field in Figure 10.
Fig. 8 Column 1: in (a) best invisibility cloak, in (b) best compromise between invisibility and protection cloak, and in (c), best protection cloak; column 2: in (d) best invisibility result, in (e) best compromise between invisibility and protection result, and in (f), best protection result. See Table 3 for numerical values. 
Fig. 9 Pareto front for NSGAII Table with 250 iterations, 100 research agents, with four selected solutions out of a total of 175. 
5.3.3 Biobjective continuous optimization with 250 iterations, 100 research agents
In this section, we display the results obtained by NSGAII: the Pareto front in Figure 11; the protection and invisibility performances in Table 5; the cloak design and wave propagation field in Figure 12.
Fig. 10 Column 1: in (a) best invisibility cloak, in (b) best compromise between invisibility and protection cloak, and in (c), best protection cloak; column 2: in (d) best invisibility result, in (e) best compromise between invisibility and protection result, and in (f), best protection result. 
Fig. 11 In red, the four selected solutions. 
Comparison of a NSGAII biobjective method in protection and invisibility for a halfrectangular cloak. n_{max} = 250, Q = 100.
5.3.4 Biobjective continuous optimization with 1000 iterations, 200 research agents
In this section, we display the results obtained by NSGAII: the Pareto front in Figure 13; the protection and invisibility performances in Table 6; the cloak design and wave propagation field in Figure 14.
Fig. 12 Column 1: in (a) best invisibility cloak, in (b) best compromise between invisibility and protection cloak, and in (c), best protection cloak; column 2: in (d) best invisibility result, in (e) best compromise between invisibility and protection result, and in (f), best protection result. 
Comparison of a NSGAII biobjective method in protection and invisibility for a halfrectangular cloak. n_{max}=1000,Q = 200 .
5.4 Spectral tolerance and comparison with random cloaks
To further verify the robustness of our approach to a change in frequency, we compute the invisibility and protection criteria for several frequency values around the central freespace wavelength λ_{0}. We compare the results obtained with the optimized cloaks to those obtained with random cloaks, built by filling each voxel of the design space with an arbitrary integer (resp. real) value in {7,10,12} (resp. [7:12]). At the central freespace wavelength λ_{0}, both criteria C_{1} (protection) and C_{2} (invisibility) are about 20 times smaller for the optimized cloaks: 1.5 × 10^{−4} vs 2.2 × 10^{−3} for C_{1}, and 6 × 10^{−5} vs 1.2 × 10^{−3} for C_{2}.
We infer from these results that optimizing the structure of the cloaks is worthwhile: the good behavior at the target frequency is obtained at the expense of slightly worse performance at other frequencies, in particular for the invisibility, which appears to be quite resonant (optimized cloaks lead to a better invisibility than any random realization for λ/λ_{0} ∈ [0.95 : 1.03]), as opposed to the protection, which turns out to be more broadband (the cloak optimized with CNSGAII+ leads to a better protection than any random realization for λ/λ_{0} ∈ [0.82 : 1.22]). We note that the optimization methods operating over a continuous space lead to a more broadband response (see red and purple curves in Fig. 15).
6 Discussion
We distinguish two situations: the ternary case, where we afford three possible values for epsilon, and the continuous case, where we afford any real value between 7 and 12.
In the ternary case, when n_{max} = 250 and Q = 100, the monoobjective approach implemented with the proposed CTGWO yields the best results in terms of protection, that is, 1.56 × 10^{−4}, and score, that is, $\frac{{f}_{1}+{f}_{2}}{2}=5.128\times {10}^{-4}$. Figure 6f confirms this, with a good protection behavior obtained with CTGWO. In these conditions, the proposed CTGWO algorithm thus yields the best tradeoff between protection and invisibility.
NSGAII yields the best result in terms of invisibility, that is, 5.363 × 10^{−4}, at the expense of a mean value $\frac{{f}_{1}+{f}_{2}}{2}=8.063\times {10}^{-4}$ which is larger than the score obtained by CTGWO. However, the interest of NSGAII is to enable an enduser to favor one criterion (for instance invisibility) over the other. When n_{max} = 1000 and Q = 200, affording therefore 8 times more trials of each objective function, the best protection reaches 1.233 × 10^{−4}, with an associated mean value $\frac{{f}_{1}+{f}_{2}}{2}=4.021\times {10}^{-4}$. In Figures 8 and 10, we can check the coherence of the shape of the wavefront with the score values obtained. A less realistic manner to improve the results is to enable the search for any real permittivity value between 7 and 12.
In the continuous case, still with 2.5 × 10^{4} trials of both f_{1} and f_{2}, we reach a best protection criterion of 7.845 × 10^{−5}, a best invisibility criterion of 3.274 × 10^{−5}, and a best mean value of 1.416 × 10^{−4}. The results are improved, but the corresponding physical constraints are much stronger, as we assume we can choose any material for any voxel in the cloak. In the continuous case with n_{max} = 1000 and Q = 200, we reach very low best protection (6.30584 × 10^{−5}), best invisibility (2.59173 × 10^{−5}), and best mean (1.10285 × 10^{−4}) values. The visual results in the continuous case (see Figs. 12 and 14) are very convincing, and clearly illustrate the difference between the best invisibility, the best compromise, and the best protection cases. Finally, the tolerance with respect to the operating freespace incident wavelength shows a broadband behavior in terms of protection and a rather narrowband behavior for the invisibility criterion, as shown by the comparison with random cloaks in Figure 15.
Fig. 13 In red, the four selected solutions. 
Fig. 14 Column 1: in (a) Best invisibility cloak, in (b) Best compromise between invisibility and protection cloak, and in (c), Best protection cloak; Column 2: in (d) Best invisibility result, in (e) Best compromise between invisibility and protection result, and in (f), Best protection result. 
Fig. 15 Spectral tolerance of the best candidates in terms of protection (C_{1}, left panel) and invisibility (C_{2}, right panel). The wavelength in abscissa is normalized by λ0, the freespace wavelength targeted for the optimization process. The blue curves correspond to the CTGWO method, the orange ones to TNSGAII, the green ones to TNSGAII+, the red ones to CNSGAII and the purple ones to CNSGAII+. The five grey (resp. black) curves in each plot represent the spectral behavior of C_{1} and C_{2} for random cloaks obtained by filling each voxel of the design space by an arbitrary integer (resp. real) value in {7, 10, 12} (resp. [7:12]). 
7 Conclusion
In this work, we address the design of an electromagnetic cloak in the transverse magnetic (TM) polarization, whereby the magnetic field is perpendicular to the (xy)plane containing the computational domain. This polarization was chosen because the model can then be adapted to plasmonics by assuming some Drudelike dispersion in the permittivity [36]. More precisely, this means that σ in equation (1) should be frequency dependent, and χ is set to 1. However, the transverse electric polarization would also be worth investigating; in that case σ is set to 1 in equation (1), and χ plays the role of the permittivity.
Our objective here is to achieve the best compromise between protection and invisibility for TM waves. In other words, we are looking for the best tradeoff between protection, which considers the wave amplitude inside the cloak, and invisibility, which considers the wave behavior outside the cloak. This is a large scale biobjective optimization problem. We propose two approaches: in the first one, we transform this problem into a monoobjective optimization problem and seek the best mean value of the protection and invisibility criteria. In the second one, we look for the best protection, the best invisibility, and the best tradeoff with a biobjective optimization algorithm. GWO is a well known monoobjective optimization algorithm which reaches the desired solution within a reduced number of iterations. We propose a novel monoobjective version of GWO, namely the chaotic ternary GWO, with three main innovations: ad hoc update rules for ternary search spaces, an evolving map and chaotic sequences to improve exploration abilities, and a division of the pack into two groups to improve diversity. We apply this algorithm, the comparative GWO and amixedGWO (both monoobjective), as well as NSGAII (biobjective), to solve the considered cloak design problem. In the considered cloaking application, and with the help of 2.5 × 10^{4} evaluations of these criteria, the proposed CTGWO algorithm yields the best mean value, that is, 5.128 × 10^{−4}, hence the best ‘tradeoff’ between protection and invisibility.
It surpasses GWO (29.871 × 10^{−4}) and amixedGWO (6.526 × 10^{−4}), as well as the best tradeoff provided by NSGAII (5.306 × 10^{−4}). A possible prospect would be to alter the ternary map in CTGWO to favor one particular material among the three chosen to build the cloak. This could help in implementing the cloak with a preferred material.
Funding
This research was funded by BTU Trust and ANRT, grant number [CONVENTION CIFRE N° 2020/0078].
Conflicts of interest
No author associated with this paper has disclosed any potential or pertinent conflict which may be perceived to have impending conflict with this work.
Data availability statement
This article has no associated available data.
Author contribution statement
Ronald Aznavourian: Conceptualization, Software, Validation, Writing. Guillaume Demesy: Software, Validation, Formal analysis. Sébastien Guenneau: Conceptualization, Validation, Supervision, Final preparing. Julien Marot: Conceptualization, Software, Editing, Writing, Supervision, Final preparing.
References
 U. Leonhardt, T. Philbin, Geometry and light: the science of invisibility (Courier Corporation, 2010) [Google Scholar]
 J.B. Pendry, D. Schurig, D.R. Smith, Controlling electromagnetic fields, Science 312, 1780 (2006) [Google Scholar]
 M. Cassier, G.W. Milton, Bounds on Herglotz functions and fundamental limits of broadband passive quasistatic cloaking, J. Math. Phy, 58, 071504 (2017) [CrossRef] [Google Scholar]
 J. Li, J.B. Pendry, Hiding under the carpet: a new strategy for cloaking, Phys. Rev. Lett. 101, 203901 (2008) [Google Scholar]
 R. Kohn, M. Vogelius, Determining conductivity by boundary measurements, Commun. Pure Appl. Math. 37, 289 (1984) [CrossRef] [Google Scholar]
 J.M. Lee, G. Uhlmann, Determining anisotropic realanalytic conductivities by boundary measurements, Commun. Pure Appl. Math. 42, 1097 (1989) [CrossRef] [Google Scholar]
 R.V. Kohn, H. Shen, M.S. Vogelius, M.I. Weinstein, Cloaking via change of variables in electric impedance tomography, Inverse Probl. 24, 015016 (2008) [CrossRef] [Google Scholar]
 J. Sylvester, G. Uhlmann, A global uniqueness theorem for an inverse boundary value problem, Ann. Math. 125 153 (1987) [CrossRef] [Google Scholar]
 A. Greenleaf, M. Lassas, G. Uhlmann, Anisotropic conductivities that cannot be detected by EIT, Physiol. Meas. 24, 413 (2003) [CrossRef] [Google Scholar]
 A. Alù, N. Engheta, Achieving transparency with plasmonic and metamaterial coatings. Phys. Rev. E 72, 016623 (2005) [Google Scholar]
 P.Y. Chen, J. Soric, A. Alù, Invisibility and cloaking based on scattering cancellation, Adv. Mater. 24, OP281 (2012) [Google Scholar]
 G.W. Milton, N.A.P. Nicorovici, R.C. McPhedran, V.A. Podolskiy, A proof of superlensing in the quasistatic regime, and limitations of superlenses in this regime due to anomalous localized resonance, Proc. Roy. Soc. A: Math. Phys. Eng. Sci. 461, 3999 (2005) [Google Scholar]
 G.W. Milton, M. Briane, J.R. Willis, On cloaking for elasticity and physical equations with a transformation invariant form, N. J. Phys. 8, 248 (2006) [CrossRef] [Google Scholar]
 M.P. Bendsøe, O. Sigmund, Material interpolation schemes in topology optimization, Arch. Appl. Mech. 69, 635 (1999) [CrossRef] [Google Scholar]
 B. Vial, Y. Hao, Topology optimized alldielectric cloak: design, performances and modal picture of the invisibility effect, Opt. Express 23, 23551 (2015) [CrossRef] [Google Scholar]
 J. Andkjær, O. Sigmund, Topology optimized lowcontrast alldielectric optical cloak, Appl. Phys. Lett. 98, 021112 (2011) [CrossRef] [Google Scholar]
 L. Pomot, C. Payan, M. Remillieux, S. Guenneau, Acoustic cloaking: geometric transform, homogenization and a genetic algorithm, Wave Motion 92, 102413 (2020) [CrossRef] [Google Scholar]
 F.L. Teixeira, W.C. Chew, General closedform PML constitutive tensors to match arbitrary bianisotropic and dispersive linear media, IEEE Microwave Guided Wave Lett. 8, 223 (1998) [CrossRef] [Google Scholar]
 F. Zolla, G. Renversez, A. Nicolet, B. Kuhlmey, S. Guenneau, D. Felbacq, A. Argyros, S. LeonSaval, Foundations of photonic crystal fibres, 2nd ed. (Imperial College Press, 2012) [Google Scholar]
 E. Kuci, F. Henrotte, P. Duysinx, C. Geuzaine, Design sensitivity analysis for shape optimization based on the Lie derivative, Comput. Methods Appl. Mech. Eng. 317, 702 (2017) [CrossRef] [Google Scholar]
 C. Geuzaine, ONELAB. http://onelab.info/, (accessed November 14, 2022) [Google Scholar]
 GetDP documentation. http://getdp.info/#Documentation (accessed November 14, 2022) [Google Scholar]
 J. Smajic, C. Hafner, D. Erni, Optimization of photonic crystal structures, J. Opt. Soc. Am. A 21, 2223 (2004) [CrossRef] [Google Scholar]
 T. Ross, G. Cormier, Particle swarm optimization for ellipsometric data inversion of samples having an arbitrary number of layers, J. Opt. Soc. Am. A 27, 319 (2010) [CrossRef] [Google Scholar]
 P. Zhang, Y. Qian, Q. Qian, Multiobjective optimization for materials design with improved NSGAII, Mater. Today Commun. 28, 102709 (2021) [CrossRef] [Google Scholar]
 W. Costa, H. Camporez, M. Pontes, M. Segatto, H. Rocha, J. Silva, M. Hinrichs, A. Paraskevopoulos, V. Jungnickel, R. Freund, Increasing the power and spectral efficiencies of an OFDMbased VLC system through multiobjective optimization, J. Opt. Soc. Am. A 40, 1268 (2023) [CrossRef] [Google Scholar]
 N. Srinivas, K. Deb, Muiltiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2, 221 (1994) [CrossRef] [Google Scholar]
 K. Deb, A. Pratap, S. Agarwal, T.A.M.T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGAII, IEEE Trans. Evol. Comput. 6, 182 (2002) [CrossRef] [Google Scholar]
 S. Mirjalili, S. Mohammad Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Softw. 69, 46 (2014) [CrossRef] [Google Scholar]
 N. Mittal, U. Singh, B. Singh Sohi, Modified grey wolf optimizer for global engineering optimization, Appl. Comput. Intell. Soft Comput. 2016, 7950348 (2016) [Google Scholar]
 B. Martin, J. Marot, S. Bourennane, Mixed grey wolf optimizer for the joint denoising and unmixing of multispectral images, Appl. Soft Comput. 74, 385 (2019) [CrossRef] [Google Scholar]
 F. Waschkowski, Y. Zhao, R. Sandberg, J. Klewicki, Multiobjective CFDdriven development of coupled turbulence closure models, J. Comput. Phys. 452, 110922 (2022) [CrossRef] [Google Scholar]
 Y. Chen, S. Zhou, Q. Li, Multiobjective topology optimization for finite periodic structures, Comput. Struct. 88, 806 (2010) [CrossRef] [Google Scholar]
 E. Emary, H.M. Zawbaa, A. Ella Hassanien, Binary grey wolf optimization approaches for feature selection, Neurocomputing 172, 371 (2016) [CrossRef] [Google Scholar]
 P. Hu, J.S. Pan, S.C. Chu, Improved Binary Grey Wolf Optimizer and Its application for feature selection, Knowledge Based Syst. 195, 105746 (2020) [CrossRef] [Google Scholar]
 M. Kadic, S. Guenneau, S. Enoch, P.A. Huidobro, L. MartinMoreno,F.J. GarcíaVidal, J. Renger, R. Quidant, Transformation plasmonics, Nanophotonics 1, 51 (2012) [CrossRef] [Google Scholar]
Cite this article as: Ronald Aznavourian, Guillaume Demesy, Sébastien Guenneau, Julien Marot, Electromagnetic cloak design with monoobjective and biobjective optimizers: seeking the best tradeoff between protection and invisibility, EPJ Appl. Metamat. 11, 11 (2024)
All Figures

Fig. 1 Sketch of the scattering problem: Perfectly Matched Layers (PMLs, light grey) surround the freespace region S_{2} (cyan) that contains the source, the cloak (blue) and the protected region S_{1} (orange).