  • 1 Institute of Mathematics, University of Debrecen, H-4002 Debrecen, Pf. 400, Hungary
  • 2 Institute of Mathematics and Computer Science, University of Nyíregyháza, H–4400 Nyíregyháza, Pf. 166, Hungary
Open access

Abstract

In 1975 C. F. Chen and C. H. Hsiao established a new procedure to solve initial value problems of systems of linear differential equations with constant coefficients by a Walsh polynomial approach. However, they did not deal with the analysis of the proposed numerical solution. In a previous article we studied this procedure in the case of one equation with the techniques that the theory of dyadic harmonic analysis provides us. In this paper we extend these results through the introduction of a new procedure to solve initial value problems of differential equations with not necessarily constant coefficients.


1 Introduction

A system formed by Walsh functions is an orthonormal system which takes only the values 1 and −1. This property, which is why many mathematicians considered the Walsh system an “artificial” orthonormal system in 1923, the year of its introduction (see [21]), offers a wide range of applications in the world of digital technology. Indeed, the Walsh functions have a great advantage with respect to the classical trigonometric functions in the sense that computers can determine the precise value of any Walsh function at any point very effectively.

In the 1970s the potential of piecewise constant orthogonal systems for signal characterization became evident. It was the reason why several researchers began to study intensively the application of Walsh functions in communication and signal processing (see, among others, [1,12,14,16,17]). In [8] Corrington developed a method to solve nth order linear differential equations using previously prepared huge tables of the Walsh-Fourier coefficients of weighted indefinite integrals of Walsh functions.

Corrington derived $n$ different tables for solving an $n$th order differential equation. In 1975 C. F. Chen and C. H. Hsiao improved the method of Corrington so that only one table is needed (see [2]). Moreover, this table contains elements that are easy to compute (see the matrices in (7) and (8)). This was possible by considering the equivalent system of linear differential equations of first order. The new approach is much simpler and more suitable for digital computation. We must add, however, that the method of Chen and Hsiao is only suitable for solving linear differential equations with constant coefficients having initial conditions at $x = 0$.

In 1975 Chen and Hsiao wrote several papers in which they showed the performance of their procedure in applications. Papers [6] and [4] present a new approach to the optimal control problem by using Walsh functions. In [5] they dealt with the application of Walsh functions to the time-domain-synthesis problem. Paper [3] establishes a clear procedure for the solution of variational problems via the Walsh function technique. On the basis of this method it was also possible to develop a technique for the analysis of time-invariant linear delay-differential equations by Walsh polynomial approximation (see [7]), and also another for solving first-order partial differential equations by double Walsh series approximation (see [19]).

The basic idea of the method of Chen and Hsiao is to avoid differentiation by considering the equivalent integral equations instead of the original differential equations, because the Walsh functions are not differentiable. They discretize these integral equations by substituting all functions in them, even the integral functions, by the partial sums of the Walsh series of these functions. Every component of the exact solution is also substituted by an unknown Walsh polynomial. The aim is to find the coefficients of these polynomials, which are obtained after solving a linear system.

However, Chen and Hsiao did not deal with an extensive analysis of the proposed numerical solution. That is, they did not determine whether the linear system is solvable or not, nor did they deal with the estimation of errors. On the other hand, to obtain high accuracy for the numerical solution we need to solve a linear system involving a very large number of variables and equations. The construction of Walsh polynomials from their coefficients also requires time. Is it possible to design a procedure to obtain the values of the Walsh polynomials directly? What is the largest class of constant terms for which the method works?

In [10] and [11] we started the analysis of these issues by considering the simplest case, i.e. studying the approximation by Walsh polynomials of the solution of one linear differential equation with constant coefficient having an initial condition at $x = 0$. In other words, we studied the initial value problem

$$y' + ay = q(x), \qquad y(0) = \eta,$$
where $a, \eta \in \mathbb{R}$. The Walsh functions are defined on the interval $[0,1[$, so the solution $y$ should be found on this interval. The continuity of the constant term $q(x)$ on the interval $[0,1[$ ensures the existence and uniqueness of the solution, but continuity on the closed interval $[0,1]$ is not needed. It is enough to suppose that the function $q$ is integrable, i.e.
$$\int_0^1 |q(x)|\,dx < \infty,$$

since the method requires the computation of Walsh-Fourier coefficients.

For every positive integer $n$ the procedure of Chen and Hsiao gives us a unique Walsh polynomial of the form

$$\bar y_n(x) = \sum_{k=0}^{2^n-1} c_k w_k(x),$$
where $w_k$ is the $k$th Walsh function ordered in Paley's sense. In [11] we proved that, with very rare exceptions, $\bar y_n$ can always be constructed. Only if $2^{n+1} = -a$ can we not solve the linear system from which we obtain the coefficients of $\bar y_n$, and this happens for at most one value of $n$ in a given differential equation. It was also proved that $\bar y_n$ converges uniformly to the exact solution $y$ as $n$ tends to infinity.

The Walsh polynomial $\bar y_n$ is a piecewise constant function on intervals of length $\frac{1}{2^n}$. To obtain the coefficients of $\bar y_n$ we solve a linear system of dimension $2^n \times 2^n$. This means that in many cases we must solve a very large linear system to obtain good accuracy. In [20] the block structure of the coefficient matrix is exploited to accelerate the computations, representing it by quantum multiple-valued decision diagrams. In [11], however, we developed a multistep algorithm to obtain the values of $\bar y_n$ directly. This allows a really quick and efficient computation of the numerical solution.

In this paper we establish a similar method for numerically solving differential equations with a non-constant coefficient. Our aim is to design a new procedure to approximate by Walsh polynomials the solution of a general linear differential equation of first order having an initial condition at $x = 0$. The basic idea is the same, but the complexity of the procedure increases and the analysis of the proposed numerical solution requires a solid mathematical background. However, we obtain excellent results which are compatible with those in [11]. We summarize these results and describe our new method in the next section.

Sections 3, 4 and 5 contain the necessary mathematical concepts and statements for the precise analysis of the method, which is implemented in all its details in Section 6. In that section we also deal with the solvability of the linear system, in other words, with the existence of the proposed numerical solution. It turns out that there exists a unique numerical solution, except for finitely many $n$. In Section 7 we prove that the numerical solution converges uniformly to the exact solution of the initial value problem, as we state in Theorem 1.

In Section 8 we propose a multistep algorithm to speed up the computations. In this way we directly obtain the values of the numerical solution without needing to solve the linear systems and generate Walsh polynomials. Algorithms of this kind are used frequently for solving differential equations numerically. Among them we would like to mention the method developed by Lukomskii and Terekhin in [15], which approximates the derivative of the solution by step functions for solving a first-order linear Cauchy problem with continuous coefficient and free term on the closed interval $[0,1]$. Our multistep algorithm only involves the integral means of these functions, so for us it is sufficient to suppose their continuity on the interval $[0,1[$.

Our method is illustrated in Section 10 through three examples. The first one solves a Cauchy problem with continuous coefficient and free term on the closed interval $[0,1]$. The second example shows the uniform convergence of the numerical solution in the case of an integrable coefficient and free term which are only continuous on the interval $[0,1[$. The third example illustrates how the multistep algorithm also works in the case of non-integrable functions. We discuss the last case in detail in Section 9.

2 Notation and main results

Consider the following initial value problem

$$y' + p(x)y = q(x), \qquad y(0) = \eta,$$
where $p, q \colon [0,1[ \to \mathbb{R}$ are integrable and continuous functions, and $\eta \in \mathbb{R}$. Integration is to be understood in the sense of Lebesgue. It is very well known that the unique solution of this problem is given by the formula
$$y(x) = e^{-\int_0^x p(t)\,dt}\left(\eta + \int_0^x q(t)\,e^{\int_0^t p(s)\,ds}\,dt\right) \qquad (0 \le x < 1).$$

Our aim is to establish a procedure in order to approximate the solution by Walsh polynomials of the form

$$\bar y_n(x) = \sum_{k=0}^{2^n-1} c_k w_k(x),$$
where $w_k$ is the $k$th Walsh function ordered in Paley's sense. In this regard, we discretize the equivalent integral equation
$$y(x) = \eta + \int_0^x \big(q(t) - p(t)y(t)\big)\,dt \qquad (0 \le x < 1)$$

replacing the functions that appear in it by the $2^n$-th partial sums of their Walsh-Fourier series. In other words, we find the Walsh polynomial $\bar y_n$ satisfying the relation

$$\bar y_n(x) = \eta + S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - S_{2^n}p(t)\,\bar y_n(t)\big)\,dt\right)(x)$$

for all $0 \le x < 1$, where $S_{2^n}f$ denotes the $2^n$-th partial sum of the Walsh-Fourier series of the integrable function $f$. We would point out that the expression

$$S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - S_{2^n}p(t)\,\bar y_n(t)\big)\,dt\right)(x)$$

denotes the $2^n$-th partial sum of the Walsh-Fourier series of the integral function

$$\int_0^{x}\big(S_{2^n}q(t) - S_{2^n}p(t)\,\bar y_n(t)\big)\,dt$$

at the point $x$.

The discretized integral equation (5) can be written as a linear system involving the variables $c_0, c_1, \dots, c_{2^n-1}$, which are the coefficients of the Walsh polynomial $\bar y_n$. This requires the introduction of the following matrix notations. First, define the column vectors

$$c := (c_0, c_1, \dots, c_{2^n-1})^{\top} \quad\text{and}\quad e_0 := (1, 0, \dots, 0)^{\top},$$

or more precisely, let $e_0$ be the first unit vector of size $2^n$. By $I$ we denote the identity matrix of size $2^n$. Moreover, with the Fourier coefficients of the functions $p$ and $q$ (see Section 3) we define the column vector

$$\hat q := (\hat q_0, \hat q_1, \dots, \hat q_{2^n-1})^{\top},$$

and the matrix

$$P := (\hat p_{i \oplus j})_{i,j=0}^{2^n-1},$$
where $i \oplus j$ is the dyadic sum of the integers $i$ and $j$ (see Section 4). Finally, we also need to introduce the integral functions of the Walsh-Paley functions, called triangular functions (see Section 5). These functions are denoted by $J_k$. With the Fourier coefficients of $J_k$ we construct the matrix
$$\hat J := (\hat J_{k,j})_{k,j=0}^{2^n-1}$$

which is a matrix having a very special form (see Section 5).

With the matrix notations above the discretized integral equation (5) can be written as the following linear system

$$(I + \hat J P)\,c = \eta e_0 + \hat J \hat q.$$

In Section 6 we prove that this linear system is solvable and has a unique solution, except for finitely many $n$. The solution of the linear system gives us the coefficients $c_0, c_1, \dots, c_{2^n-1}$ of the numerical solution $\bar y_n$. In Section 7 we prove that $\bar y_n$ converges uniformly to the exact solution of the initial value problem (1) on the interval $[0,1[$. In other words, we obtain the following result.

Theorem 1

Let $p$ and $q$ be two continuous and integrable functions defined on the interval $[0,1[$. Then there exists a unique Walsh polynomial $\bar y_n$ which satisfies the discretized integral equation (5), except for finitely many $n$. Moreover, $\bar y_n$ converges uniformly to the solution of the initial value problem (1) on the interval $[0,1[$ as $n$ tends to infinity.

The numerical solution $\bar y_n$ may be computed much more quickly, without solving the linear system, by a multistep algorithm. Section 8 describes how we design this algorithm.

The established procedure can be extended to non-integrable functions $p$ and $q$. We have two possibilities to do this. One is to modify the functions $p$ and $q$, limiting their values in a neighbourhood of the point $x = 1$ so that they become integrable. The modified initial value problem has the same exact solution outside this neighbourhood, so our method gives us a numerical solution, except in this neighbourhood. The other possibility is to use the multistep algorithm. It works with the exception of the last step, generating the numerical solution $\bar y_n$ on the interval $[0, 1 - \frac{1}{2^n}[$. More details can be found in Section 9.

3 The 2n-th partial sums of Walsh-Fourier series

First we introduce the concept of Walsh-Paley functions. Every $n \in \mathbb{N}$ can be uniquely expressed as

$$n = \sum_{k=0}^{\infty} n_k 2^k,$$
where $n_k = 0$ or $n_k = 1$ for all $k \in \mathbb{N}$. This allows us to say that the sequence $(n_0, n_1, \dots)$ is the dyadic expansion of $n$. Similarly, the dyadic expansion $(x_0, x_1, \dots)$ of a real number $x \in [0,1[$ is given by the sum
$$x = \sum_{k=0}^{\infty} \frac{x_k}{2^{k+1}},$$
where $x_k = 0$ or $x_k = 1$ for all $k \in \mathbb{N}$. This expansion is not unique if $x$ is a dyadic rational, i.e. $x$ is a number of the form $\frac{i}{2^k}$, where $i, k \in \mathbb{N}$ and $0 \le i < 2^k$. When this situation occurs we choose the expansion terminating in zeros. Define the dyadic sum of two numbers $x, y \in [0,1[$ with expansions $(x_0, x_1, \dots)$ and $(y_0, y_1, \dots)$ respectively by
$$x \dotplus y := \sum_{k=0}^{\infty} \frac{|x_k - y_k|}{2^{k+1}}.$$

Walsh functions are finite products of the functions

$$r_k(x) := (-1)^{x_k} \qquad (x \in [0,1[,\ k \in \mathbb{N}),$$

the so-called Rademacher functions. If we order the Walsh functions as

$$w_n(x) := \prod_{k=0}^{\infty} r_k^{n_k}(x) \qquad (x \in [0,1[,\ n \in \mathbb{N}),$$

then we obtain the Walsh-Paley system. $w_n$ is called the $n$th Walsh-Paley function or, in other words, the $n$th Walsh function ordered in Paley's sense.
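The definitions above translate directly into code. The following sketch (our own illustration, not part of the original paper) evaluates $w_n$ at a point by reading off the dyadic digits of $x$ one by one:

```python
def walsh(n, x):
    """Evaluate the nth Walsh-Paley function w_n(x) for x in [0,1).

    w_n(x) = prod_k r_k(x)^{n_k}, where r_k(x) = (-1)^{x_k} and
    (n_0, n_1, ...), (x_0, x_1, ...) are the dyadic expansions of n and x.
    """
    sign = 1
    while n > 0:
        digit = int(2 * x) % 2          # current dyadic digit x_k of x
        if (n & 1) and digit:           # factor r_k(x)^{n_k} contributes a -1
            sign = -sign
        n >>= 1                         # next binary digit of n
        x = (2 * x) % 1.0               # shift to the next dyadic digit of x
    return sign
```

For example, $w_1 = r_0$ equals 1 on $[0,\frac12[$ and $-1$ on $[\frac12,1[$; sampling the first $2^N$ functions at the points $j/2^N$ reproduces the orthonormality relations exactly, since these functions are constant on dyadic intervals of length $2^{-N}$.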

A Walsh polynomial is a finite linear combination of Walsh functions. The Walsh-Paley system is orthonormal, i.e.

$$\int_0^1 w_n(x)w_m(x)\,dx = \begin{cases} 1, & n = m, \\ 0, & n \ne m. \end{cases}$$

Among other things, the orthonormality of the Walsh-Paley system ensures that two Walsh polynomials are equal at every point if and only if they have the same coefficients.

In this paper we deal with real functions defined on the interval [0,1[. For an integrable function f, i.e.

$$\int_0^1 |f(x)|\,dx < \infty,$$

we define the Fourier coefficients and the partial sums of the Fourier series by

$$\hat f_k := \int_0^1 f(x)w_k(x)\,dx \quad (k \in \mathbb{N}), \qquad S_n f(x) := \sum_{k=0}^{n-1} \hat f_k w_k(x) \quad (n \in \mathbb{N},\ x \in [0,1[).$$

It is important to note that the $2^n$-th partial sums can be written as

$$S_{2^n}f(x) = 2^n \int_{I_n(x)} f(y)\,dy,$$
where the sets
$$I_k(i) := \left[\frac{i-1}{2^k}, \frac{i}{2^k}\right[ \qquad (i = 1, \dots, 2^k)$$

are called dyadic intervals and $I_n(x)$ denotes the dyadic interval of length $\frac{1}{2^n}$ which contains the point $x$. $S_{2^n}f$ converges to $f$ in $L^1$-norm for every integrable function $f$ (see [18, p. 142]), meaning that

limn01|S2nf(x)f(x)|dx=0.
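Since $S_{2^n}f$ is just the dyadic-interval mean of $f$, it is easy to compute numerically. The sketch below is an illustration of ours (the inner midpoint Riemann sum is our choice of quadrature; any rule would do); it approximates the interval means and the $L^1$ error $\int_0^1|S_{2^n}f - f|$:

```python
def dyadic_means(f, n, sub=200):
    """Values of S_{2^n}f on the 2^n dyadic intervals: 2^n * integral of f over I_n(i)."""
    h = 1.0 / (2**n * sub)
    means = []
    for i in range(2**n):
        a = i / 2**n
        integral = sum(f(a + (j + 0.5) * h) for j in range(sub)) * h
        means.append(2**n * integral)
    return means

def l1_error(f, n, sub=200):
    """Approximate int_0^1 |S_{2^n}f(x) - f(x)| dx by a midpoint rule."""
    m = dyadic_means(f, n, sub)
    h = 1.0 / (2**n * sub)
    err = 0.0
    for i in range(2**n):
        a = i / 2**n
        err += sum(abs(f(a + (j + 0.5) * h) - m[i]) for j in range(sub)) * h
    return err
```

For a smooth function such as $f(x) = x^2$ the error shrinks roughly by half with each increase of $n$, illustrating the $L^1$ convergence stated above.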

The integral above can be estimated by the dyadic $L^1$ modulus of continuity of $f$. It is a sequence tending to zero, defined by

$$\omega_n^{(1)}f := \sup\left\{\int_0^1 \big|f(x \dotplus h) - f(x)\big|\,dx : 0 \le h < 2^{-n}\right\}.$$

Indeed, it is not hard to prove (see [18]), that

$$\int_0^1 \big|S_{2^n}f(x) - f(x)\big|\,dx \le \omega_n^{(1)}f.$$

A continuous function defined on the interval $[0,1[$ is not necessarily integrable. The function

$$f \colon [0,1[ \to \mathbb{R}, \qquad f(x) = \frac{1}{1-x}$$

is a clear example of this. However, if in addition the function has a finite limit from the left of 1, then it is integrable, since it can be extended to a continuous function on the interval $[0,1]$. In this case the function $f$ has a finite dyadic modulus of continuity, defined by

$$\omega_n f := \sup\big\{|f(x \dotplus h) - f(x)| : x \in [0,1[,\ 0 \le h < 2^{-n}\big\},$$

which tends to zero. It is not hard to prove (see [18]), that

$$|S_{2^n}f(x) - f(x)| \le 5\,\omega_n f$$

for all $x \in [0,1[$. In other words, $S_{2^n}f$ converges to $f$ uniformly on the interval $[0,1[$ for every continuous function $f$ with a finite limit from the left of 1. Note that there are integrable and continuous functions on the interval $[0,1[$ which do not have a finite limit from the left of 1, as in the case of the function

$$f \colon [0,1[ \to \mathbb{R}, \qquad f(x) = \sin\frac{1}{1-x}.$$

When this happens, $S_{2^n}f$ converges to $f$ at every point of the interval $[0,1[$, but not uniformly.

The following lemma is especially important in the analysis of the numerical solution that we are establishing in this paper.

Lemma 1

Suppose $f \colon [0,1[ \to \mathbb{R}$ is constant on the dyadic intervals of length $\frac{1}{2^n}$ and $x \in I_n(i)$ for some $i = 1, 2, \dots, 2^n$. Then

$$S_{2^n}\left(\int_0^{\,\cdot} f(t)\,dt\right)(x) = \frac{1}{2^n}\sum_{k=1}^{i-1} f\!\left(\frac{k-1}{2^n}\right) + \frac{1}{2^{n+1}}\,f\!\left(\frac{i-1}{2^n}\right).$$

Proof. Let $\chi_{I_n(k)}$ be the characteristic function of the interval $I_n(k)$, where $k = 1, 2, \dots, 2^n$, i.e.

$$\chi_{I_n(k)}(x) := \begin{cases} 1, & \frac{k-1}{2^n} \le x < \frac{k}{2^n}, \\ 0, & \text{otherwise}. \end{cases}$$

Thus,

$$\int_0^x \chi_{I_n(k)}(t)\,dt = \begin{cases} 0, & 0 \le x < \frac{k-1}{2^n}, \\ x - \frac{k-1}{2^n}, & \frac{k-1}{2^n} \le x < \frac{k}{2^n}, \\ \frac{1}{2^n}, & \frac{k}{2^n} \le x < 1, \end{cases}$$

and therefore

$$S_{2^n}\left(\int_0^{\,\cdot} \chi_{I_n(k)}(t)\,dt\right)(x) = \begin{cases} 0, & 0 \le x < \frac{k-1}{2^n}, \\ \frac{1}{2^{n+1}}, & \frac{k-1}{2^n} \le x < \frac{k}{2^n}, \\ \frac{1}{2^n}, & \frac{k}{2^n} \le x < 1. \end{cases}$$

Since $f$ is constant on the dyadic intervals of length $\frac{1}{2^n}$ we have

$$f(x) = \sum_{k=1}^{2^n} f\!\left(\frac{k-1}{2^n}\right)\chi_{I_n(k)}(x)$$

for all $x \in [0,1[$. For this reason, if $x \in I_n(i)$ we have

$$S_{2^n}\left(\int_0^{\,\cdot} f(t)\,dt\right)(x) = \sum_{k=1}^{2^n} f\!\left(\frac{k-1}{2^n}\right) S_{2^n}\left(\int_0^{\,\cdot} \chi_{I_n(k)}(t)\,dt\right)(x) = \sum_{k=1}^{i-1} f\!\left(\frac{k-1}{2^n}\right)\frac{1}{2^n} + f\!\left(\frac{i-1}{2^n}\right)\frac{1}{2^{n+1}} + \sum_{k=i+1}^{2^n} f\!\left(\frac{k-1}{2^n}\right)\cdot 0,$$

from which we obtain the statement of the lemma.
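Lemma 1 can be checked numerically: for a step function the integral function is piecewise linear, so its mean over a dyadic interval is the average of the two endpoint values. A small exact-arithmetic sketch of ours, using random integer step values:

```python
from fractions import Fraction as Fr
import random

random.seed(0)
n = 3
N = 2**n
vals = [Fr(random.randint(-5, 5)) for _ in range(N)]  # f on the dyadic intervals

def F(x):
    """Integral function of the step function f at a point x in [0,1]."""
    i = min(int(x * N), N - 1)           # index of the dyadic interval containing x
    return sum(vals[:i], Fr(0)) / N + vals[i] * (x - Fr(i, N))
```

For $x \in I_n(i)$ the left hand side of the lemma is the mean of the linear function $F$ over $I_n(i)$, i.e. $\tfrac12\big(F(\tfrac{i-1}{2^n}) + F(\tfrac{i}{2^n})\big)$, which agrees exactly with the right hand side.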

4 Dyadically circulant matrices

Define the dyadic sum of a pair of non-negative integers $i$ and $j$ by

$$i \oplus j := \sum_{k=0}^{\infty} |i_k - j_k|\,2^k,$$

where $(i_0, i_1, \dots)$ and $(j_0, j_1, \dots)$ are the dyadic expansions of the integers $i$ and $j$ respectively. For every positive integer $n$ the set $\{0, 1, \dots, 2^n-1\}$ with the dyadic sum forms a group.

A square matrix $A$ of size $2^n$ is called a dyadically circulant matrix (see [13]) if all its rows can be obtained from the elements $a_0, a_1, \dots, a_{2^n-1}$ in its first row by the dyadic sum, satisfying the following rule

$$a_{i,j} = a_{i \oplus j} \qquad (i, j = 0, 1, \dots, 2^n-1),$$
where $a_{i,j}$ is the entry in the $i$th row and $j$th column of $A$. In this way, the matrix has the following form
$$A = \begin{pmatrix}
a_0 & a_1 & a_2 & a_3 & \cdots & a_{2^n-4} & a_{2^n-3} & a_{2^n-2} & a_{2^n-1} \\
a_1 & a_0 & a_3 & a_2 & \cdots & a_{2^n-3} & a_{2^n-4} & a_{2^n-1} & a_{2^n-2} \\
a_2 & a_3 & a_0 & a_1 & \cdots & a_{2^n-2} & a_{2^n-1} & a_{2^n-4} & a_{2^n-3} \\
a_3 & a_2 & a_1 & a_0 & \cdots & a_{2^n-1} & a_{2^n-2} & a_{2^n-3} & a_{2^n-4} \\
\vdots & & & & \ddots & & & & \vdots \\
a_{2^n-4} & a_{2^n-3} & a_{2^n-2} & a_{2^n-1} & \cdots & a_0 & a_1 & a_2 & a_3 \\
a_{2^n-3} & a_{2^n-4} & a_{2^n-1} & a_{2^n-2} & \cdots & a_1 & a_0 & a_3 & a_2 \\
a_{2^n-2} & a_{2^n-1} & a_{2^n-4} & a_{2^n-3} & \cdots & a_2 & a_3 & a_0 & a_1 \\
a_{2^n-1} & a_{2^n-2} & a_{2^n-3} & a_{2^n-4} & \cdots & a_3 & a_2 & a_1 & a_0
\end{pmatrix}.$$

We also say that $A$ is the dyadically circulant matrix generated by the numbers $a_0, a_1, \dots, a_{2^n-1}$.

By the concept of the dyadic sum, it is not hard to see that $A$ is the dyadically circulant matrix generated by the numbers $a_0, a_1, \dots, a_{2^n-1}$ if and only if $A$ can be partitioned as follows

$$A = \begin{pmatrix} A_0 & A_1 \\ A_1 & A_0 \end{pmatrix},$$
where $A_0$ and $A_1$ are the dyadically circulant matrices generated by the numbers $a_0, a_1, \dots, a_{2^{n-1}-1}$ and $a_{2^{n-1}}, a_{2^{n-1}+1}, \dots, a_{2^n-1}$ respectively. This means that every dyadically circulant matrix is symmetric and therefore diagonalizable. More precisely, the dyadically circulant matrix $A$ can be written in the form
$$A = CDC^{-1},$$
where $D$ is a diagonal matrix made up of the eigenvalues of $A$ and $C$ is an orthogonal matrix made up of the eigenvectors of $A$. The following lemma gives us the diagonalization of dyadically circulant matrices.

Lemma 2

Let $A$ be the dyadically circulant matrix generated by the numbers $a_0, a_1, \dots, a_{2^n-1}$ and consider the Walsh polynomial

$$a(x) = \sum_{j=0}^{2^n-1} a_j w_j(x) \qquad (x \in [0,1[).$$

Then the eigenvalues of the matrix $A$ are given by the formula $\lambda_k = a\!\left(\frac{k}{2^n}\right)$ for all $k = 0, 1, \dots, 2^n-1$, and the eigenvector corresponding to the eigenvalue $\lambda_k$ is

$$w_k = \left(w_k(0), w_k\!\left(\frac{1}{2^n}\right), \dots, w_k\!\left(\frac{2^n-1}{2^n}\right)\right)^{\top}.$$

Proof. We prove that

$$A w_k = \lambda_k w_k$$

for all $k = 0, 1, \dots, 2^n-1$, from which the lemma directly follows. In this regard we show that the $i$th elements of the vectors $A w_k$ and $\lambda_k w_k$ are the same. Indeed,

$$\sum_{j=0}^{2^n-1} a_{i,j}\,w_k\!\left(\frac{j}{2^n}\right) = \sum_{j=0}^{2^n-1} a_{i \oplus j}\,w_k\!\left(\frac{j}{2^n}\right) = \sum_{r=0}^{2^n-1} a_r\,w_k\!\left(\frac{r \oplus i}{2^n}\right) = \sum_{r=0}^{2^n-1} a_r\,w_k\!\left(\frac{r}{2^n}\right) w_k\!\left(\frac{i}{2^n}\right) = \sum_{r=0}^{2^n-1} a_r\,w_r\!\left(\frac{k}{2^n}\right) w_k\!\left(\frac{i}{2^n}\right) = a\!\left(\frac{k}{2^n}\right) w_k\!\left(\frac{i}{2^n}\right) = \lambda_k\,w_k\!\left(\frac{i}{2^n}\right).$$

In the computation above we used the substitution $r = i \oplus j$ and the fact that the set $\{0, 1, \dots, 2^n-1\}$ is a group under the dyadic sum. Moreover, we also used elementary properties of the Walsh-Paley functions (see [18]). This completes the proof of the lemma.
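Lemma 2 is easy to verify computationally: the dyadic sum of integers is just bitwise XOR, so the matrix $A$ can be built from its first row, and $Aw_k$ can be compared with $a(k/2^n)\,w_k$ in exact integer arithmetic. A sketch of ours:

```python
import random

random.seed(0)
n = 3
N = 2**n
a = [random.randint(-9, 9) for _ in range(N)]          # first row of A

def w(k, j):
    """w_k(j/2^n) in Paley ordering: the dyadic digits of j/2^n are the
    bits of j read from the most significant position."""
    s = 1
    for b in range(n):
        if (k >> b) & 1 and (j >> (n - 1 - b)) & 1:
            s = -s
    return s

A = [[a[i ^ j] for j in range(N)] for i in range(N)]   # a_{i,j} = a_{i (+) j}

def eigen_pair(k):
    """Return (lambda_k, w_k, A w_k) for the kth Walsh eigenvector."""
    lam = sum(a[m] * w(m, k) for m in range(N))        # lambda_k = a(k/2^n)
    wk = [w(k, i) for i in range(N)]
    Awk = [sum(A[i][j] * wk[j] for j in range(N)) for i in range(N)]
    return lam, wk, Awk
```

Running `eigen_pair(k)` for every $k$ confirms the eigenvalue formula of the lemma exactly.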

The lemma above tells us that if the dyadically circulant matrix $A$ is generated by the coefficients of the Walsh polynomial

$$a(x) = \sum_{j=0}^{2^n-1} a_j w_j(x) \qquad (x \in [0,1[),$$
then $A$ can be written as $A = W D_a W^{-1}$, where
$$D_a = \begin{pmatrix}
a(0) & 0 & 0 & \cdots & 0 \\
0 & a\!\left(\frac{1}{2^n}\right) & 0 & \cdots & 0 \\
0 & 0 & a\!\left(\frac{2}{2^n}\right) & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a\!\left(\frac{2^n-1}{2^n}\right)
\end{pmatrix}$$

is a diagonal matrix and

$$W = \begin{pmatrix}
w_0(0) & w_1(0) & w_2(0) & \cdots & w_{2^n-1}(0) \\
w_0\!\left(\frac{1}{2^n}\right) & w_1\!\left(\frac{1}{2^n}\right) & w_2\!\left(\frac{1}{2^n}\right) & \cdots & w_{2^n-1}\!\left(\frac{1}{2^n}\right) \\
w_0\!\left(\frac{2}{2^n}\right) & w_1\!\left(\frac{2}{2^n}\right) & w_2\!\left(\frac{2}{2^n}\right) & \cdots & w_{2^n-1}\!\left(\frac{2}{2^n}\right) \\
\vdots & & & \ddots & \vdots \\
w_0\!\left(\frac{2^n-1}{2^n}\right) & w_1\!\left(\frac{2^n-1}{2^n}\right) & w_2\!\left(\frac{2^n-1}{2^n}\right) & \cdots & w_{2^n-1}\!\left(\frac{2^n-1}{2^n}\right)
\end{pmatrix}.$$

$W$ is called the Hadamard matrix of size $2^n$ with respect to the Walsh-Paley system. It is not hard to see that the matrix $W$ is symmetric and

$$W^{-1} = \frac{1}{2^n}\,W.$$

This is due to the orthonormality of the Walsh-Paley system.

The following lemma is obtained directly from the diagonalization of dyadically circulant matrices and the proof is elementary linear algebra.

Lemma 3

Let $A$ and $B$ be the dyadically circulant matrices generated by the coefficients of the Walsh polynomials

$$a(x) = \sum_{j=0}^{2^n-1} a_j w_j(x) \quad\text{and}\quad b(x) = \sum_{j=0}^{2^n-1} b_j w_j(x)$$

respectively, and let $\alpha, \beta \in \mathbb{R}$. Then the set of dyadically circulant matrices of the same size is a commutative algebra. Moreover,

  • $\alpha A + \beta B$ is the dyadically circulant matrix generated by the coefficients of the Walsh polynomial $\alpha a(x) + \beta b(x)$.
  • $AB$ is the dyadically circulant matrix generated by the coefficients of the Walsh polynomial $a(x)b(x)$ (and therefore $AB = BA$).
  • $\det A = \prod_{k=0}^{2^n-1} a\!\left(\frac{k}{2^n}\right)$.

We apply the results of Lemma 3 to calculate the determinant of a matrix related to the Fourier coefficients of triangular functions.

5 The triangular functions

Triangular functions are the integral functions of the Walsh-Paley functions. We denote them by

$$J_k(x) := \int_0^x w_k(t)\,dt \qquad (k \in \mathbb{N},\ 0 \le x < 1).$$

Let us consider the Walsh-Fourier series of the triangular functions $J_k$, denoting their Fourier coefficients by $\hat J_{k,j}$, and hence

$$J_k(x) = \sum_{j=0}^{\infty} \hat J_{k,j}\,w_j(x).$$

The coefficients $\hat J_{k,j}$ often take the value 0. The exact calculation of these values can be found in [11], obtained directly from Fine's formulae (see [9]). With them we construct the matrices $\hat J^{(n)}$ whose entries are $\hat J_{k,j}$, where $k, j = 0, 1, \dots, 2^n-1$. Note that these matrices are almost skew-symmetric; more precisely, $\hat J^{(n)}_{k,j} = -\hat J^{(n)}_{j,k}$ if $k^2 + j^2 \ne 0$.

For example

$$\hat J^{(3)} = \begin{pmatrix}
\frac{1}{2} & -\frac{1}{4} & -\frac{1}{8} & 0 & -\frac{1}{16} & 0 & 0 & 0 \\
\frac{1}{4} & 0 & 0 & -\frac{1}{8} & 0 & -\frac{1}{16} & 0 & 0 \\
\frac{1}{8} & 0 & 0 & 0 & 0 & 0 & -\frac{1}{16} & 0 \\
0 & \frac{1}{8} & 0 & 0 & 0 & 0 & 0 & -\frac{1}{16} \\
\frac{1}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{16} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{16} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{16} & 0 & 0 & 0 & 0
\end{pmatrix}.$$

Note that the matrices $\hat J^{(n)}$ can also be constructed by the iteration

$$\hat J^{(0)} = \left(\tfrac{1}{2}\right), \qquad \hat J^{(n)} = \begin{pmatrix} \hat J^{(n-1)} & -\frac{1}{2^{n+1}} I_{2^{n-1}} \\ \frac{1}{2^{n+1}} I_{2^{n-1}} & 0_{2^{n-1}} \end{pmatrix},$$
where $I_j$ and $0_j$ denote the identity and null matrices of size $j$.
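The iteration above is straightforward to implement in exact rational arithmetic. The following sketch of ours builds $\hat J^{(n)}$ by the block recursion and reproduces the example $\hat J^{(3)}$:

```python
from fractions import Fraction as Fr

def Jhat(n):
    """Build the matrix J^(n) by the block iteration with exact fractions."""
    M = [[Fr(1, 2)]]
    for m in range(1, n + 1):
        s = 2 ** (m - 1)
        # top block row: [ J^(m-1) | -1/2^(m+1) * I ]
        top = [M[i] + [-Fr(1, 2**(m + 1)) if i == j else Fr(0) for j in range(s)]
               for i in range(s)]
        # bottom block row: [ 1/2^(m+1) * I | 0 ]
        bot = [[Fr(1, 2**(m + 1)) if i == j else Fr(0) for j in range(s)]
               + [Fr(0)] * s for i in range(s)]
        M = top + bot
    return M

J3 = Jhat(3)
```

The first row of `J3` is $(\frac12, -\frac14, -\frac18, 0, -\frac1{16}, 0, 0, 0)$, matching the displayed example, and the almost skew-symmetry $\hat J_{k,j} = -\hat J_{j,k}$ (except at the $(0,0)$ entry) holds exactly.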

Matrices $\hat J^{(n)}$ are involved in the linear system from which we obtain the numerical solution of the studied initial value problem. We will discuss this in the next section, but first we must prove an interesting relationship between $\hat J^{(n)}$ and dyadically circulant matrices. In this regard we first prove the following lemma.

Lemma 4

For every positive integer $n$ we have

$$W^{-1}\hat J^{(n)\top}W = \begin{pmatrix}
\frac{1}{2^{n+1}} & 0 & \cdots & 0 & 0 \\
\frac{1}{2^n} & \frac{1}{2^{n+1}} & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
\frac{1}{2^n} & \frac{1}{2^n} & \cdots & \frac{1}{2^{n+1}} & 0 \\
\frac{1}{2^n} & \frac{1}{2^n} & \cdots & \frac{1}{2^n} & \frac{1}{2^{n+1}}
\end{pmatrix}.$$

Proof. We compute directly the entry $a_{ij}$ of the matrix $W^{-1}\hat J^{(n)\top}W$, proving that

$$a_{ij} = \begin{cases} 0, & i < j, \\ \frac{1}{2^{n+1}}, & i = j, \\ \frac{1}{2^n}, & i > j. \end{cases}$$

By the definition of the coefficients

$$\hat J_{l,k} = \int_0^1 \int_0^x w_l(t)\,dt\; w_k(x)\,dx$$

and by the fact that $W$ is a symmetric and orthogonal matrix such that $W^{-1} = \frac{1}{2^n}W$ holds, using the elementary properties of the Walsh-Paley functions (see [18]) we have

$$a_{ij} = \frac{1}{2^n}\sum_{k=0}^{2^n-1}\sum_{l=0}^{2^n-1} w_k\!\left(\frac{i}{2^n}\right)\hat J_{l,k}\,w_l\!\left(\frac{j}{2^n}\right) = \frac{1}{2^n}\sum_{k=0}^{2^n-1}\sum_{l=0}^{2^n-1}\int_0^1\int_0^x w_l\!\left(t \dotplus \frac{j}{2^n}\right)dt\; w_k\!\left(x \dotplus \frac{i}{2^n}\right)dx = \frac{1}{2^n}\int_0^1\int_0^x D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\; D_{2^n}\!\left(x \dotplus \frac{i}{2^n}\right)dx,$$
where
$$D_{2^n}(x) := \sum_{k=0}^{2^n-1} w_k(x) \qquad (x \in [0,1[)$$

is the Dirichlet kernel of order $2^n$. This function has the following property (see [18])

$$D_{2^n}(x) = \begin{cases} 2^n, & 0 \le x < \frac{1}{2^n}, \\ 0, & \frac{1}{2^n} \le x < 1, \end{cases}$$

from which we have

$$\int_0^x D_{2^n}(t)\,dt = \begin{cases} 2^n x, & 0 \le x < \frac{1}{2^n}, \\ 1, & \frac{1}{2^n} \le x < 1. \end{cases}$$

By the translation invariance of the dyadic sum and (10) we continue the calculation of the entry $a_{ij}$ as follows

$$a_{ij} = \frac{1}{2^n}\int_0^1\int_0^{x \dotplus \frac{i}{2^n}} D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\; D_{2^n}(x)\,dx = \int_0^{\frac{1}{2^n}}\int_0^{x \dotplus \frac{i}{2^n}} D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\,dx.$$

Note that $x \dotplus \frac{i}{2^n} = x + \frac{i}{2^n}$ if $x < \frac{1}{2^n}$. For this reason we can decompose the integrals above in the following way

$$a_{ij} = \int_0^{\frac{1}{2^n}}\int_0^{x+\frac{i}{2^n}} D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\,dx = \int_0^{\frac{1}{2^n}}\sum_{r=0}^{i-1}\int_{\frac{r}{2^n}}^{\frac{r+1}{2^n}} D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\,dx + \int_0^{\frac{1}{2^n}}\int_{\frac{i}{2^n}}^{x+\frac{i}{2^n}} D_{2^n}\!\left(t \dotplus \frac{j}{2^n}\right)dt\,dx = \int_0^{\frac{1}{2^n}}\sum_{r=0}^{i-1}\int_{\frac{r}{2^n}\dotplus\frac{j}{2^n}}^{\left(\frac{r}{2^n}\dotplus\frac{j}{2^n}\right)+\frac{1}{2^n}} D_{2^n}(t)\,dt\,dx + \int_0^{\frac{1}{2^n}}\int_{\frac{i}{2^n}\dotplus\frac{j}{2^n}}^{\left(\frac{i}{2^n}\dotplus\frac{j}{2^n}\right)+x} D_{2^n}(t)\,dt\,dx =: J_1 + J_2.$$

We obtain immediately from (10) that

$$\int_{\frac{r}{2^n}\dotplus\frac{j}{2^n}}^{\left(\frac{r}{2^n}\dotplus\frac{j}{2^n}\right)+\frac{1}{2^n}} D_{2^n}(t)\,dt = \begin{cases} 1, & r = j, \\ 0, & r \ne j. \end{cases}$$

Hence

$$J_1 = \begin{cases} \frac{1}{2^n}, & i > j, \\ 0, & i \le j. \end{cases}$$

Similarly, $J_2$ is nonzero only if $i = j$. In this case by (11) we obtain

$$\int_0^{\frac{1}{2^n}}\int_{\frac{i}{2^n}\dotplus\frac{i}{2^n}}^{\left(\frac{i}{2^n}\dotplus\frac{i}{2^n}\right)+x} D_{2^n}(t)\,dt\,dx = \int_0^{\frac{1}{2^n}}\int_0^{x} D_{2^n}(t)\,dt\,dx = \int_0^{\frac{1}{2^n}} 2^n x\,dx = \frac{1}{2^{n+1}}.$$

Therefore,

$$J_2 = \begin{cases} \frac{1}{2^{n+1}}, & i = j, \\ 0, & i \ne j. \end{cases}$$

Finally, we obtain (9) by adding the results obtained for $J_1$ and $J_2$, which completes the proof of the lemma.

From the lemma above we obtain the following result.
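Lemma 4 can be confirmed numerically by conjugating $\hat J^{(n)\top}$ with the Hadamard matrix $W$, using $W^{-1} = W/2^n$. A NumPy sketch of ours with $n = 3$:

```python
import numpy as np

n, N = 3, 8

def w(k, j):
    """w_k(j/2^n) in Paley ordering."""
    s = 1
    for b in range(n):
        if (k >> b) & 1 and (j >> (n - 1 - b)) & 1:
            s = -s
    return s

# Hadamard matrix: row j = point j/2^n, column k = function index
W = np.array([[w(k, j) for k in range(N)] for j in range(N)], float)

Jh = np.array([[0.5]])                        # block iteration for J^(n)
for m in range(1, n + 1):
    s = 2 ** (m - 1)
    Jh = np.block([[Jh, -np.eye(s) / 2**(m + 1)],
                   [np.eye(s) / 2**(m + 1), np.zeros((s, s))]])

T = (W / N) @ Jh.T @ W                        # W^{-1} J^(n)T W
```

The matrix `T` comes out lower triangular with $\frac{1}{2^{n+1}}$ on the diagonal and $\frac{1}{2^n}$ below it, exactly as the lemma states.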

Lemma 5

Let $A$ and $B$ be the dyadically circulant matrices generated by the coefficients of the Walsh polynomials

$$a(x) = \sum_{j=0}^{2^n-1} a_j w_j(x) \quad\text{and}\quad b(x) = \sum_{j=0}^{2^n-1} b_j w_j(x)$$

respectively. Then

$$\det\big(A + \hat J^{(n)}B\big) = \prod_{i=0}^{2^n-1}\left(a\!\left(\frac{i}{2^n}\right) + \frac{1}{2^{n+1}}\,b\!\left(\frac{i}{2^n}\right)\right).$$

Proof. By the diagonalization of the matrices $A$ and $B$ we obtain

$$\det\big(A + \hat J^{(n)}B\big) = \det\big(WD_aW^{-1} + WW^{-1}\hat J^{(n)}WD_bW^{-1}\big) = \det\big(W\big(D_a + W^{-1}\hat J^{(n)}WD_b\big)W^{-1}\big) = \det\big(D_a + W^{-1}\hat J^{(n)}WD_b\big).$$

Since by Lemma 4 the matrix $W^{-1}\hat J^{(n)}W$ is triangular, the matrix

$$D_a + W^{-1}\hat J^{(n)}WD_b$$

is also triangular, and the entries in its diagonal are exactly the factors in the product of the formula which we have to prove. This means that the determinant of the matrix above is the product of these numbers, which implies that Lemma 5 holds.

Fig. 1. The triangular function $J_{10}$

Citation: Studia Scientiarum Mathematicarum Hungarica 57, 2; 10.1556/012.2020.57.2.1459

6 The existence of the numerical solution

As we have said before, the method consists of discretizing the integral equation

$$y(x) = \eta + \int_0^x \big(q(t) - p(t)y(t)\big)\,dt \qquad (0 \le x < 1),$$

substituting all functions in it by the $2^n$-th partial sums of their Walsh series. The solution $y$ is also substituted by the unknown Walsh polynomial

$$\bar y_n(x) = \sum_{k=0}^{2^n-1} c_k w_k(x).$$

Our aim is to find the Walsh polynomial $\bar y_n$ which satisfies the discretized integral equation

$$\bar y_n(x) = \eta + S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - S_{2^n}p(t)\,\bar y_n(t)\big)\,dt\right)(x),$$
where $w_k$ is the $k$th Walsh-Paley function and $S_{2^n}f$ is the $2^n$-th partial sum of the Walsh-Fourier series of the integrable function $f$. Remember, we suppose that $p$ and $q$ are continuous and integrable functions defined on the interval $[0,1[$.

Let us recall the matrix notation introduced in Section 2:

$$c := (c_0, c_1, \dots, c_{2^n-1})^{\top}, \qquad e_0 := (1, 0, \dots, 0)^{\top} \text{ of size } 2^n, \qquad \hat q := (\hat q_0, \hat q_1, \dots, \hat q_{2^n-1})^{\top},$$
$$\hat J := (\hat J_{k,j})_{k,j=0}^{2^n-1}, \qquad P := (\hat p_{i \oplus j})_{i,j=0}^{2^n-1}.$$

Note that $P$ is the dyadically circulant matrix generated by the coefficients of $S_{2^n}p(x)$. We also introduce the notation

$$w(x) := \big(w_0(x), w_1(x), \dots, w_{2^n-1}(x)\big)$$

for all x ∈ [0,1[. With these matrix notations the discretized integral equation can be written as follows

$$w(x)c = \eta + S_{2^n}\left(\int_0^{\,\cdot}\big(w(t)\hat q - (w(t)\hat p)(w(t)c)\big)\,dt\right)(x) = \eta + S_{2^n}\left(\int_0^{\,\cdot}\big(w(t)\hat q - w(t)Pc\big)\,dt\right)(x) = w(x)\eta e_0 + S_{2^n}\left(\int_0^{\,\cdot} w(t)\,dt\right)(x)\,(\hat q - Pc) = w(x)\eta e_0 + w(x)\hat J(\hat q - Pc) = w(x)\big(\eta e_0 + \hat J(\hat q - Pc)\big)$$

at every point of $[0,1[$. In the equation above we used the following computation

$$w(t)\hat p \cdot w(t)c = \sum_{k=0}^{2^n-1}\hat p_k w_k(t)\sum_{j=0}^{2^n-1}c_j w_j(t) = \sum_{k,j=0}^{2^n-1}\hat p_k c_j\,w_k(t)w_j(t) = \sum_{k,j=0}^{2^n-1}\hat p_k c_j\,w_{k\oplus j}(t) = \sum_{i,j=0}^{2^n-1}\hat p_{i\oplus j}c_j\,w_i(t) = w(t)Pc,$$

and in addition we used the fact that $P = P^{\top}$, since each dyadically circulant matrix is symmetric. The equality of the Walsh polynomials

$$w(x)c = w(x)\big(\eta e_0 + \hat J(\hat q - Pc)\big)$$

obtained in (12) implies that they have the same coefficients, i.e.

$$c = \eta e_0 + \hat J(\hat q - Pc),$$

which is a linear system involving the variables $c_0, c_1, \dots, c_{2^n-1}$. This linear system can be written as follows

$$(I + \hat J P)\,c = \eta e_0 + \hat J \hat q,$$
where $I$ is the identity matrix of size $2^n$.

The solvability of the linear system (13) only depends on whether the value of $\det(I + \hat J P)$ is zero or not. Lemma 5 gives us the answer, since with $A = I$ and $B = P$ we directly obtain

$$\det(I + \hat J P) = \prod_{i=0}^{2^n-1}\left(1 + \frac{1}{2^{n+1}}\,S_{2^n}p\!\left(\frac{i}{2^n}\right)\right).$$

For this reason, the linear system (13) has a unique solution given by the formula

$$c = (I + \hat J P)^{-1}\big(\eta e_0 + \hat J \hat q\big)$$

if $S_{2^n}p\!\left(\frac{i}{2^n}\right) \ne -2^{n+1}$ for all $i = 0, 1, \dots, 2^n-1$. Otherwise, the linear system is not solvable. However, the assumption that the function $p$ is integrable means that the integral function

$$F(x) = \int_0^x p(t)\,dt$$

is absolutely continuous on the closed interval $[0,1]$. From this fact it immediately follows that

$$\lim_{n\to\infty}\ \max_{0 \le i < 2^n}\left\{\frac{1}{2^n}\left|S_{2^n}p\!\left(\frac{i}{2^n}\right)\right|\right\} = 0.$$

This means that $S_{2^n}p\!\left(\frac{i}{2^n}\right) = -2^{n+1}$ is only possible for finitely many $n$ and $i$. This leads us to conclude that the linear system (13) is solvable, except for finitely many $n$. This proves the first part of Theorem 1.
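To make the construction concrete, here is a sketch of ours that assembles and solves the linear system (13). The test problem $y' + xy = x$, $y(0) = 1$ is our own choice; its exact solution is $y \equiv 1$, and since here $p = q$, the discretized equation is satisfied exactly by $\bar y_n \equiv 1$, which the code confirms. Computing $\hat p$ from the exact dyadic-interval means of $p$ is also our assumption:

```python
import numpy as np

n = 4
N = 2**n

def w(k, j):
    """w_k(j/2^n) in Paley ordering."""
    s = 1
    for b in range(n):
        if (k >> b) & 1 and (j >> (n - 1 - b)) & 1:
            s = -s
    return s

W = np.array([[w(k, j) for k in range(N)] for j in range(N)], float)

# Walsh-Fourier coefficients of p(x) = q(x) = x for k < 2^n, computed from
# the dyadic-interval means of p (exact here, since p is linear)
means = np.array([(j + 0.5) / N for j in range(N)])
phat = W.T @ means / N
qhat = phat.copy()

P = np.array([[phat[i ^ j] for j in range(N)] for i in range(N)])   # circulant

Jh = np.array([[0.5]])                         # block iteration for J^(n)
for m in range(1, n + 1):
    s = 2 ** (m - 1)
    Jh = np.block([[Jh, -np.eye(s) / 2**(m + 1)],
                   [np.eye(s) / 2**(m + 1), np.zeros((s, s))]])

eta = 1.0
e0 = np.zeros(N)
e0[0] = 1.0
c = np.linalg.solve(np.eye(N) + Jh @ P, eta * e0 + Jh @ qhat)

ybar = W @ c       # values of the numerical solution on the dyadic grid
```

Since $S_{2^n}p \ge 0$ here, the determinant product above is nonzero, so the system is uniquely solvable, and the computed coefficient vector is $c = e_0$, giving $\bar y_n \equiv 1$.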

7 The uniform convergence of the numerical solution

In order to analyze the convergence we deal with the upper estimation of the absolute difference between the exact solution and the numerical solution of the problem (1) for every point $x \in [0,1[$. It will be established in two steps according to

$$|y(x) - \bar y_n(x)| \le |y(x) - S_{2^n}y(x)| + |S_{2^n}y(x) - \bar y_n(x)|.$$

First, let us note that by (2) the solution $y$ of the Cauchy problem (1) can be extended continuously to the closed interval $[0,1]$, since the integrability of the functions $p$ and $q$ ensures that the limit

$$\lim_{x \to 1^-} y(x) = e^{-\int_0^1 p(t)\,dt}\left(\eta + \int_0^1 q(t)\,e^{\int_0^t p(s)\,ds}\,dt\right)$$

is finite. This means that the solution $y$ has a finite modulus of continuity and

$$|y(x) - S_{2^n}y(x)| \le \omega_n y$$

for all $x \in [0,1[$ (see [18]). Therefore, the first addend on the right hand side of (16) tends uniformly to zero.

For the estimation of the second addend we introduce the function

$$z_n(x) := \bar y_n(x) - S_{2^n}y(x) \qquad (x \in [0,1[).$$

Thus, by (5) and (4) for all $x \in [0,1[$ we obtain

$$z_n(x) = \eta + S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - S_{2^n}p(t)\bar y_n(t)\big)dt\right)(x) - S_{2^n}\left(\eta + \int_0^{\,\cdot}\big(q(t) - p(t)y(t)\big)dt\right)(x) = S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - q(t)\big)dt\right)(x) - S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}p(t) - p(t)\big)y(t)\,dt\right)(x) + S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)\big(y(t) - S_{2^n}y(t)\big)dt\right)(x) - S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)z_n(t)\,dt\right)(x).$$

For simplicity we use the notation

$$m_n(x) := S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - q(t)\big)dt\right)(x) - S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}p(t) - p(t)\big)y(t)\,dt\right)(x) + S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)\big(y(t) - S_{2^n}y(t)\big)dt\right)(x),$$

therefore

$$z_n(x) = m_n(x) - S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)z_n(t)\,dt\right)(x).$$

The functions $S_{2^n}p$, $m_n$ and $z_n$ are constant on the dyadic intervals $I_n(i)$ for all $i = 1, 2, \dots, 2^n$. Hence by Lemma 1 we have

$$S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)z_n(t)\,dt\right)(x) = \frac{1}{2^n}\sum_{k=1}^{i-1}S_{2^n}p\!\left(\frac{k-1}{2^n}\right)z_n\!\left(\frac{k-1}{2^n}\right) + \frac{1}{2^{n+1}}\,S_{2^n}p\!\left(\frac{i-1}{2^n}\right)z_n\!\left(\frac{i-1}{2^n}\right),$$

if $x \in I_n(i)$. Thus, by (18) we have

$$z_n\!\left(\frac{i-1}{2^n}\right) = m_n\!\left(\frac{i-1}{2^n}\right) - \frac{1}{2^n}\sum_{k=1}^{i-1}S_{2^n}p\!\left(\frac{k-1}{2^n}\right)z_n\!\left(\frac{k-1}{2^n}\right) - \frac{1}{2^{n+1}}\,S_{2^n}p\!\left(\frac{i-1}{2^n}\right)z_n\!\left(\frac{i-1}{2^n}\right)$$

for all $i = 1, 2, \dots, 2^n$, which can be written as

$$\left(1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{i-1}{2^n}\right)\right)z_n\!\left(\frac{i-1}{2^n}\right) = m_n\!\left(\frac{i-1}{2^n}\right) - \frac{1}{2^n}\sum_{k=1}^{i-1}S_{2^n}p\!\left(\frac{k-1}{2^n}\right)z_n\!\left(\frac{k-1}{2^n}\right).$$

By (15) the sequence $\frac{1}{2^n}S_{2^n}p(x)$ tends to zero uniformly on the interval $[0,1[$.

For this reason there exists an $n_0 \in \mathbb{N}$ such that

$$\left|\frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{k}{2^n}\right)\right| < \frac{1}{2}$$

if $n > n_0$ for all $k = 0, 1, \dots, 2^n-1$, and in this case $1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{k}{2^n}\right) \ne 0$.

Thus, we can define the numbers

$$\rho_k(n) := \frac{\frac{1}{2^n}S_{2^n}p\!\left(\frac{k}{2^n}\right)}{1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{k}{2^n}\right)} \qquad (k = 0, 1, \dots, 2^n-1).$$

From the formula (19) it is not difficult to prove that

$$\left(1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{i-1}{2^n}\right)\right)z_n\!\left(\frac{i-1}{2^n}\right) = m_n\!\left(\frac{i-1}{2^n}\right) - \sum_{k=1}^{i-1}\rho_{k-1}(n)\,m_n\!\left(\frac{k-1}{2^n}\right)\prod_{j=k+1}^{i-1}\big(1 - \rho_{j-1}(n)\big)$$

for all $i = 1, 2, \dots, 2^n$. Indeed, both expressions give the same value for $i = 1$, that is

$$\left(1 + \frac{1}{2^{n+1}}S_{2^n}p(0)\right)z_n(0) = m_n(0),$$

and supposing that (20) holds for all numbers 1,2,…,i − 1 we obtain from (19) that

$$\begin{aligned}
\left(1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{i}{2^n}\right)\right)z_n\!\left(\frac{i}{2^n}\right) &= m_n\!\left(\frac{i}{2^n}\right) - \frac{1}{2^n}\sum_{s=1}^{i}S_{2^n}p\!\left(\frac{s-1}{2^n}\right)z_n\!\left(\frac{s-1}{2^n}\right)\\
&= m_n\!\left(\frac{i}{2^n}\right) - \sum_{s=1}^{i}\frac{\frac{1}{2^n}S_{2^n}p\!\left(\frac{s-1}{2^n}\right)}{1 + \frac{1}{2^{n+1}}S_{2^n}p\!\left(\frac{s-1}{2^n}\right)}\left(m_n\!\left(\frac{s-1}{2^n}\right) - \sum_{k=1}^{s-1}\rho_{k-1}(n)\,m_n\!\left(\frac{k-1}{2^n}\right)\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\right)\\
&= m_n\!\left(\frac{i}{2^n}\right) - \sum_{s=1}^{i}\rho_{s-1}(n)\,m_n\!\left(\frac{s-1}{2^n}\right) + \sum_{s=1}^{i}\sum_{k=1}^{s-1}\rho_{k-1}(n)\,m_n\!\left(\frac{k-1}{2^n}\right)\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n)\\
&= m_n\!\left(\frac{i}{2^n}\right) - \sum_{k=1}^{i}\rho_{k-1}(n)\,m_n\!\left(\frac{k-1}{2^n}\right)\left(1 - \sum_{s=k+1}^{i}\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n)\right)\\
&= m_n\!\left(\frac{i}{2^n}\right) - \sum_{k=1}^{i}\rho_{k-1}(n)\,m_n\!\left(\frac{k-1}{2^n}\right)\prod_{j=k+1}^{i}\big(1-\rho_{j-1}(n)\big),
\end{aligned}$$

which means that (20) holds for $i$. Therefore, (20) holds for all $i = 1, 2, \dots, 2^n$. In the calculations above we used the equality

$$1 - \sum_{s=k+1}^{i}\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n) = \prod_{j=k+1}^{i}\big(1-\rho_{j-1}(n)\big),$$

which can be easily proved by using iteratively the transformation

$$\begin{aligned}
1 - \sum_{s=k+1}^{i}\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n) &= 1 - \rho_k(n) - \sum_{s=k+2}^{i}\prod_{j=k+1}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n)\\
&= 1 - \rho_k(n) - \big(1-\rho_k(n)\big)\sum_{s=k+2}^{i}\prod_{j=k+2}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n)\\
&= \big(1-\rho_k(n)\big)\left(1 - \sum_{s=k+2}^{i}\prod_{j=k+2}^{s-1}\big(1-\rho_{j-1}(n)\big)\,\rho_{s-1}(n)\right).
\end{aligned}$$
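The product identity used above holds for arbitrary numbers in place of the $\rho_j(n)$, so it can be sanity-checked with random values. A throwaway sketch of ours:

```python
import random

random.seed(1)
rho = [random.uniform(-0.4, 0.4) for _ in range(20)]   # stand-ins for rho_j(n)

def prod(factors):
    """Product of an iterable of numbers (empty product is 1)."""
    out = 1.0
    for v in factors:
        out *= v
    return out

def lhs(k, i):
    return 1 - sum(prod(1 - rho[j - 1] for j in range(k + 1, s)) * rho[s - 1]
                   for s in range(k + 1, i + 1))

def rhs(k, i):
    return prod(1 - rho[j - 1] for j in range(k + 1, i + 1))
```

Both sides agree to machine precision for every admissible pair $k < i$.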

To estimate the absolute value of $z_n$ we deal first with the function $m_n$.

Note that for all $x \in [0,1[$ we have

$$\left|S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}q(t) - q(t)\big)dt\right)(x)\right| \le S_{2^n}\left(\int_0^{\,\cdot}\big|S_{2^n}q(t) - q(t)\big|\,dt\right)(x) \le \int_0^1\big|S_{2^n}q(t) - q(t)\big|\,dt \le \omega_n^{(1)}q$$

and

$$\left|S_{2^n}\left(\int_0^{\,\cdot}\big(S_{2^n}p(t) - p(t)\big)y(t)\,dt\right)(x)\right| \le S_{2^n}\left(\int_0^{\,\cdot}\big|\big(S_{2^n}p(t) - p(t)\big)y(t)\big|\,dt\right)(x) \le \int_0^1\big|S_{2^n}p(t) - p(t)\big|\,|y(t)|\,dt \le \|y\|\,\omega_n^{(1)}p,$$

where $\|y\| := \sup\{|y(x)| : x \in [0,1[\}$, which is finite since $y$ is a bounded function on $[0,1[$, while $\omega_n^{(1)}p$ and $\omega_n^{(1)}q$ denote the dyadic $L^1$ modulus of continuity of the functions $p$ and $q$ respectively. Moreover,

$$\int_{\frac{j-1}{2^n}}^{\frac{j}{2^n}}\big(S_{2^n}y(t) - y(t)\big)\,dt = 0 \qquad (j = 1, 2, \dots, 2^n),$$

and $S_{2^n}p$ is constant on all dyadic intervals $I_n(j)$. Hence, if $x \in I_n(i)$ we have

$$S_{2^n}\left(\int_0^{\,\cdot}S_{2^n}p(t)\big(y(t) - S_{2^n}y(t)\big)dt\right)(x) = 2^n\int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}}\int_0^{\tau}S_{2^n}p(t)\big(y(t) - S_{2^n}y(t)\big)\,dt\,d\tau = 2^n\int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}}\int_{\frac{i-1}{2^n}}^{\tau}S_{2^n}p(t)\big(y(t) - S_{2^n}y(t)\big)\,dt\,d\tau = 2^n\,S_{2^n}p(x)\int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}}\int_{\frac{i-1}{2^n}}^{\tau}\big(y(t) - S_{2^n}y(t)\big)\,dt\,d\tau,$$

and then

$$\begin{aligned}
\left|S_{2^n}\left(\int_0^{\cdot} S_{2^n}p(t)\left(y(t) - S_{2^n}y(t)\right)dt\right)(x)\right|
&\leq 2^n \left|S_{2^n}p(x)\right| \int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}} \int_{\frac{i-1}{2^n}}^{\tau} \left|y(t) - S_{2^n}y(t)\right|dt\,d\tau \\
&\leq 2^n \left|S_{2^n}p(x)\right| \int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}} \int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}} \left|y(t) - S_{2^n}y(t)\right|dt\,d\tau \\
&\leq \max_{0 \leq i < 2^n}\left\{\frac{1}{2^n}\left|S_{2^n}p\left(\frac{i}{2^n}\right)\right|\right\} \omega_n(y).
\end{aligned}$$

Summarizing our results we have that $|m_n(x)| \leq M_n$ for all $x \in [0,1[$, where

$$M_n := \omega_n^{(1)}(q) + \|y\|\,\omega_n^{(1)}(p) + \max_{0 \leq i < 2^n}\left\{\frac{1}{2^n}\left|S_{2^n}p\left(\frac{i}{2^n}\right)\right|\right\} \omega_n(y),$$

which by (15) tends to zero as $n \to \infty$. Thus, by (20) we obtain

$$\begin{aligned}
\left|1 + \frac{1}{2^{n+1}} S_{2^n}p\left(\frac{i-1}{2^n}\right)\right| \left|z_n\left(\frac{i-1}{2^n}\right)\right|
&\leq M_n + \sum_{k=1}^{i-1} \left|\rho_{k-1}^{(n)}\right| M_n \prod_{j=k+1}^{i-1}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right) \\
&= M_n \left(1 + \sum_{k=1}^{i-1} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-1}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right)\right) \\
&= M_n \prod_{k=1}^{i-1}\left(1 + \left|\rho_{k-1}^{(n)}\right|\right).
\end{aligned}$$

In the calculations above we use the equality

$$1 + \sum_{k=1}^{i-1} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-1}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right) = \prod_{k=1}^{i-1}\left(1 + \left|\rho_{k-1}^{(n)}\right|\right),$$

which can be easily proved by using iteratively the transformation

$$\begin{aligned}
1 + \sum_{k=1}^{i-1} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-1}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right)
&= 1 + \left|\rho_{i-2}^{(n)}\right| + \sum_{k=1}^{i-2} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-1}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right) \\
&= 1 + \left|\rho_{i-2}^{(n)}\right| + \left(1 + \left|\rho_{i-2}^{(n)}\right|\right) \sum_{k=1}^{i-2} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-2}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right) \\
&= \left(1 + \left|\rho_{i-2}^{(n)}\right|\right)\left(1 + \sum_{k=1}^{i-2} \left|\rho_{k-1}^{(n)}\right| \prod_{j=k+1}^{i-2}\left(1 + \left|\rho_{j-1}^{(n)}\right|\right)\right).
\end{aligned}$$

We know that

$$\left|\frac{1}{2^{n+1}} S_{2^n}p\left(\frac{k}{2^n}\right)\right| < \frac{1}{2}$$

for all $k = 0,1,\dots,2^n-1$, if $n > n_0$. Hence,

$$1 + \frac{1}{2^{n+1}} S_{2^n}p\left(\frac{i-1}{2^n}\right) > \frac{1}{2}, \qquad \left|\rho_k^{(n)}\right| < \frac{1}{2^{n-1}}\left|S_{2^n}p\left(\frac{k}{2^n}\right)\right|.$$

Therefore,

$$\frac{1}{2}\left|z_n\left(\frac{i-1}{2^n}\right)\right| \leq M_n \prod_{k=1}^{i-1}\left(1 + \frac{1}{2^{n-1}}\left|S_{2^n}p\left(\frac{k-1}{2^n}\right)\right|\right) = M_n \prod_{k=1}^{i-1}\left(1 + 2\left|\int_{\frac{k-1}{2^n}}^{\frac{k}{2^n}} p(x)\,dx\right|\right) \leq M_n \prod_{k=1}^{i-1}\left(1 + 2\int_{\frac{k-1}{2^n}}^{\frac{k}{2^n}} |p(x)|\,dx\right),$$

if $n > n_0$. Thus, for $i = 1$ we have $|z_n(0)| \leq 2M_n$, and by the inequality of arithmetic and geometric means we obtain

$$\left|z_n\left(\frac{i-1}{2^n}\right)\right| \leq 2M_n \left(\frac{1}{i-1}\sum_{k=1}^{i-1}\left(1 + 2\int_{\frac{k-1}{2^n}}^{\frac{k}{2^n}} |p(x)|\,dx\right)\right)^{i-1} = 2M_n \left(1 + \frac{2}{i-1}\int_0^{\frac{i-1}{2^n}} |p(x)|\,dx\right)^{i-1} \leq 2M_n \left(1 + \frac{2}{i-1}\int_0^1 |p(x)|\,dx\right)^{i-1} < 2M_n\, e^{2\int_0^1 |p(x)|\,dx}$$

for all $i = 2,3,\dots,2^n$. Therefore,

$$|z_n(x)| \leq 2M_n\, e^{2\int_0^1 |p(x)|\,dx} \qquad (x \in [0,1[),$$

which tends to zero as $n \to \infty$, meaning that the second part of (16) also tends uniformly to zero. Thus, the absolute difference

$$|y(x) - \bar{y}_n(x)|$$

tends uniformly to zero, hence Theorem 1 is true.

8 A multistep algorithm for the numerical solution

The disadvantage of our method is the requirement of solving a linear system with a very large number of equations. In addition, we must construct a Walsh polynomial with the coefficients that appear in the solution. The amount of time required for these computations would be really large if we tried to obtain high accuracy. In this section we propose a faster method for directly getting the values of the numerical solution without needing to solve the linear systems and generate Walsh polynomials. The method is based on the fact that the numerical solution $\bar{y}_n$ is constant on the dyadic intervals of length $2^{-n}$, i.e.

$$\bar{y}_n(x) = \bar{y}_n\left(\frac{i-1}{2^n}\right) \qquad (x \in I_n(i))$$

for all $i = 1,2,\dots,2^n$. The point is to calculate the value of $\bar{y}_n\left(\frac{i-1}{2^n}\right)$ from the previous values $\bar{y}_n\left(\frac{k}{2^n}\right)$, where $k = 0,1,\dots,i-2$, starting from the value of $\bar{y}_n(0)$. This is called a multistep algorithm, and algorithms of this kind are frequently used to solve differential equations numerically.

In order to design the algorithm, note that the function $S_{2^n}q(x) - S_{2^n}p(x)\bar{y}_n(x)$ is constant on the dyadic intervals of length $2^{-n}$. Thus, by (5) and Lemma 1 we obtain

$$\bar{y}_n(x) = \eta + S_{2^n}\left(\int_0^{\cdot}\left(S_{2^n}q(t) - S_{2^n}p(t)\bar{y}_n(t)\right)dt\right)(x) = \eta + \frac{1}{2^n}\sum_{k=1}^{i-1}\left(S_{2^n}q\left(\frac{k-1}{2^n}\right) - S_{2^n}p\left(\frac{k-1}{2^n}\right)\bar{y}_n\left(\frac{k-1}{2^n}\right)\right) + \frac{1}{2^{n+1}}\left(S_{2^n}q\left(\frac{i-1}{2^n}\right) - S_{2^n}p\left(\frac{i-1}{2^n}\right)\bar{y}_n\left(\frac{i-1}{2^n}\right)\right)$$

if xIn(i) for all i = 1,2,…,2n. Note also thatyn(x)=yn(i12n). After solving the formula above fory¯n(i12n) we obtain the multistep algorithm

$$\bar{y}_n\left(\frac{i-1}{2^n}\right) = \frac{1}{1 + \frac{1}{2^{n+1}} S_{2^n}p\left(\frac{i-1}{2^n}\right)}\left(\eta + \frac{1}{2^n}\sum_{k=1}^{i-1}\left(S_{2^n}q\left(\frac{k-1}{2^n}\right) - S_{2^n}p\left(\frac{k-1}{2^n}\right)\bar{y}_n\left(\frac{k-1}{2^n}\right)\right) + \frac{1}{2^{n+1}} S_{2^n}q\left(\frac{i-1}{2^n}\right)\right)$$

which starts from i = 1 taking the value

$$\bar{y}_n(0) = \frac{1}{1 + \frac{1}{2^{n+1}} S_{2^n}p(0)}\left(\eta + \frac{1}{2^{n+1}} S_{2^n}q(0)\right).$$

Observe that for the multistep algorithm we only need the $2^n$th partial sums of the Walsh series of the functions $p$ and $q$, which are just integral means. Hence they are simpler to compute than the Walsh-Fourier coefficients that appear in the linear system. Note also that, unlike in other multistep algorithms, the value of $\bar{y}_n(0)$ does not necessarily have to be $\eta$, but it tends to $\eta$ as $n \to \infty$.
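The recursion above can be sketched in a few lines of Python. This is our own sketch, not the Maple code used for the computations below; in particular, approximating the integral means $S_{2^n}p$ and $S_{2^n}q$ by a midpoint quadrature rule is an extra implementation choice.

```python
import math

def walsh_multistep(p, q, eta, n, quad_pts=64):
    """Multistep algorithm (sketch): the step function constant on the
    dyadic intervals of length 2**-n that approximates the solution of
    y' + p(x)y = q(x), y(0) = eta.  The partial sums S_{2^n}p, S_{2^n}q
    are the integral means of p and q over the dyadic intervals,
    approximated here by a midpoint rule with quad_pts nodes."""
    N = 2 ** n
    h = 1.0 / N

    def mean(f, i):
        # approximate integral mean of f over I_n(i) = [(i-1)/2^n, i/2^n)
        a = (i - 1) * h
        return sum(f(a + (j + 0.5) * h / quad_pts)
                   for j in range(quad_pts)) / quad_pts

    values = []   # values[i-1] is the value of the step function on I_n(i)
    acc = 0.0     # (1/2^n) * sum_{k=1}^{i-1} (S_{2^n}q - S_{2^n}p * y) so far
    for i in range(1, N + 1):
        Sp, Sq = mean(p, i), mean(q, i)
        yi = (eta + acc + Sq / (2 * N)) / (1 + Sp / (2 * N))
        acc += (Sq - Sp * yi) / N
        values.append(yi)
    return values
```

For instance, for the first problem of Section 10, `walsh_multistep(math.tan, lambda x: math.sin(2 * x), 2.0, 6)` returns the 64 step values of $\bar{y}_6$ at the points $(i-1)/2^6$.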

9 The extension of the method for not integrable functions

The assumption of integrability of the functions $p$ and $q$ is essential for implementing our method: the Walsh-Fourier coefficients of $p$ and $q$ are needed for the linear system, and the partial sums $S_{2^n}p$ and $S_{2^n}q$ are required for the multistep algorithm. What can we do if $p$ and $q$ are continuous on the interval $[0,1[$ but not integrable? In this regard consider the initial value problem

$$y' + p(x)y = q(x), \qquad y(0) = \eta, \tag{21}$$
where $p,q \colon [0,1[ \to \mathbb{R}$ are continuous functions and $\eta \in \mathbb{R}$, but $p$ or $q$ is not integrable on $[0,1]$. The situation is quite different now, since the solution $y$ may be unbounded. In this case a numerical solution formed by Walsh polynomials cannot converge uniformly to $y$. But the continuity ensures that $p$ and $q$ are integrable on $[0,\alpha]$ for every number $0 < \alpha < 1$. For this reason, we propose to modify the initial value problem (21) as follows. Let $0 < \alpha < 1$ be a fixed number, and define
$$\tilde{p}(x) := \begin{cases} p(x), & 0 \leq x < \alpha, \\ p(\alpha), & \alpha \leq x < 1, \end{cases} \qquad \tilde{q}(x) := \begin{cases} q(x), & 0 \leq x < \alpha, \\ q(\alpha), & \alpha \leq x < 1. \end{cases}$$

Now consider the modified initial value problem

$$y' + \tilde{p}(x)y = \tilde{q}(x), \qquad y(0) = \eta.$$

Note that the functions $\tilde{p}$ and $\tilde{q}$ are continuous and integrable, hence Theorem 1 is valid for the modified problem above. The uniqueness of the solution of a general initial value problem for linear differential equations implies that the original and the modified initial value problem have the same solution on the interval $[0,\alpha[$. Therefore, the procedure implemented for the modified problem gives us a numerical solution $\bar{y}_n$ which converges uniformly to the exact solution of the original problem on the interval $[0,\alpha[$. Consequently, if we would like to obtain an approximation of the solution at a point $x \in [0,1[$, we may take a value of $\alpha$ greater than $x$ and numerically solve the modified initial value problem by our method.
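The modification of $p$ and $q$ is straightforward to express in code; a one-line Python sketch (the name `modify` is ours):

```python
def modify(f, alpha):
    """Return the function that agrees with f on [0, alpha[ and is
    frozen at the constant value f(alpha) on [alpha, 1[."""
    return lambda x: f(x) if x < alpha else f(alpha)
```

For example, with `p = lambda x: 1.0 / (1.0 - x)` (continuous on $[0,1[$ but not integrable), `modify(p, 0.5)` equals $p$ below $\frac12$ and is constantly $2$ above it.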

However, in practice it is better to use the multistep algorithm. It is usable since the continuity of the functions p and q ensures that the integrals

$$S_{2^n}p\left(\frac{i-1}{2^n}\right) = 2^n \int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}} p(t)\,dt, \qquad S_{2^n}q\left(\frac{i-1}{2^n}\right) = 2^n \int_{\frac{i-1}{2^n}}^{\frac{i}{2^n}} q(t)\,dt$$

exist for all $i = 1,2,\dots,2^n-1$. For $i = 2^n$ the integrals above can be calculated only if the functions $p$ and $q$ are integrable on the whole interval $[0,1]$. This means that the multistep algorithm can always be implemented, except for the last step.

In case that $\alpha = 1 - \frac{1}{2^n}$ we have

$$S_{2^n}\tilde{p}\left(\frac{i-1}{2^n}\right) = S_{2^n}p\left(\frac{i-1}{2^n}\right), \qquad S_{2^n}\tilde{q}\left(\frac{i-1}{2^n}\right) = S_{2^n}q\left(\frac{i-1}{2^n}\right)$$

for all $1 \leq i \leq 2^n - 1$. Consequently, the multistep algorithm generates the same solution for the original and the modified initial value problem on the interval $[0, 1-\frac{1}{2^n}[$. The remaining part, i.e. the interval $[1-\frac{1}{2^n}, 1[$, is of length $\frac{1}{2^n}$, which tends to zero as $n$ tends to infinity. This allows us to say that the numerical solutions $\bar{y}_n$ generated by the multistep algorithm converge to the exact solution $y$ of the original problem at every point of the interval $[0,1[$. We mean that for every $x \in [0,1[$ we can find a positive integer $n_0$ such that $x$ is in the domain of $\bar{y}_n$ for all $n \geq n_0$ and $\bar{y}_n(x) \to y(x)$. The convergence is uniform on every closed subinterval of $[0,1[$.

10 Examples

In the first instance, consider the problem

$$y' + \tan(x)\,y = \sin 2x, \qquad y(0) = 2. \tag{23}$$

Note that the functions $p(x) = \tan x$ and $q(x) = \sin 2x$ are both continuous on the interval $[0,1]$, therefore they are integrable with a finite limit from the left of $1$. In this case the numerical solutions $\bar{y}_n$ converge uniformly to the exact solution of the problem, which is

$$y(x) = 4\cos x - 2\cos^2 x.$$

Figure 2 illustrates how close the numerical solution $\bar{y}_4$ is to the solution $y$. Moreover, Table 1 shows us how fast the convergence is. We can see that the supremum of the absolute difference between the solution and the numerical solution is reduced almost by half when the value of $n$ increases by one. We made all computations in this section in Maple with the multistep algorithm.

Fig. 2. $\bar{y}_4$, the numerical solution of the problem (23) for $n = 4$

Citation: Studia Scientiarum Mathematicarum Hungarica 57, 2; 10.1556/012.2020.57.2.1459

Table 1.

$\sup|y(x) - \bar{y}_n(x)|$ on the dyadic intervals of length $\frac{1}{8}$ for (23)

n     0≤x<1/8     1/8≤x<1/4   1/4≤x<3/8   3/8≤x<1/2   1/2≤x<5/8   5/8≤x<3/4   3/4≤x<7/8   7/8≤x<1
3     0.00006096  0.00090779  0.00387621  0.01020253  0.02086089  0.03647553  0.05725102  0.08291886
4     0.00005710  0.00066004  0.00248792  0.00612909  0.01202888  0.02044979  0.03144224  0.04482592
5     0.00004161  0.00039879  0.00140792  0.00335484  0.00645115  0.01081544  0.01646018  0.02328479
6     0.00002517  0.00021901  0.00074847  0.00175429  0.00333943  0.00556000  0.00841912  0.01186406
7     0.00001384  0.00011473  0.00038582  0.00089691  0.00169876  0.00281865  0.00425734  0.00598791
8     0.00000725  0.00005871  0.00019586  0.00045346  0.00085672  0.00141906  0.00214068  0.00300798
9     0.00000371  0.00002970  0.00009868  0.00022799  0.00043020  0.00071197  0.00107335  0.00150751
10    0.00000188  0.00001494  0.00004952  0.00011431  0.00021556  0.00035660  0.00053743  0.00075463

The second example shows us a problem with integrable functions p and q, but one of them does not have a finite limit from the left of 1. Consider the problem

$$y' + \frac{3}{2\sqrt{1-x}}\,y = 1 - x - \sqrt{1-x}, \qquad y(0) = \frac{2}{3}. \tag{24}$$

In this case

$$p(x) = \frac{3}{2\sqrt{1-x}}, \qquad q(x) = 1 - x - \sqrt{1-x},$$

which are both integrable even though the limit of the function $p$ from the left of $1$ is infinite. Theorem 1 is also valid in this case, therefore the numerical solutions $\bar{y}_n$ also converge uniformly to the exact solution of the problem, which is

$$y(x) = \frac{2}{3}\sqrt{(1-x)^3}.$$

The behavior of the convergence is similar to that of the previous problem. We can also see in the following Table 2 that the supremum of the absolute difference between the solution and the numerical solution is reduced almost by half when the value of $n$ increases by one.

Table 2.

$\sup|y(x) - \bar{y}_n(x)|$ on the dyadic intervals of length $\frac{1}{8}$ for (24)

n     0≤x<1/8     1/8≤x<1/4   1/4≤x<3/8   3/8≤x<1/2   1/2≤x<5/8   5/8≤x<3/4   3/4≤x<7/8   7/8≤x<1
3     0.06062299  0.05667439  0.05238233  0.04765793  0.04235894  0.03623165  0.02872980  0.01863694
4     0.03077155  0.02877700  0.02662168  0.02426382  0.02163804  0.01863124  0.01501273  0.01011927
5     0.01550418  0.01450120  0.01342051  0.01224183  0.01093372  0.00944255  0.00766154  0.00530135
6     0.00778214  0.00727914  0.00673795  0.00614856  0.00549556  0.00475279  0.00386876  0.00270748
7     0.00389864  0.00364675  0.00337593  0.00308121  0.00275496  0.00238425  0.00194378  0.00136753
8     0.00195122  0.00182518  0.00168971  0.00154234  0.00137927  0.00119408  0.00097423  0.00068717
9     0.00097609  0.00091304  0.00084529  0.00077161  0.00069009  0.00059753  0.00048770  0.00034443
10    0.00048816  0.00045663  0.00042275  0.00038591  0.00034515  0.00029889  0.00024400  0.00017242
Table 3.

Value of $|y(x) - \bar{y}_n(x)|$ at some points for (25)

n     x=1/2       x=3/4       x=7/8       x=15/16     x=31/32     x=63/64     x=127/128   x=255/256
3     0.02993002  0.13766372
4     0.01208584  0.05776304  0.11082398
5     0.00543147  0.02667540  0.04993559  0.06924819
6     0.00257483  0.01283496  0.02387420  0.03206557  0.03856926
7     0.00125360  0.00629703  0.01168669  0.01553123  0.01807579  0.02033629
8     0.00061852  0.00311901  0.00578315  0.00765158  0.00880501  0.00958484  0.01043952
9     0.00030721  0.00155221  0.00287681  0.00379847  0.00434996  0.00468136  0.00493385  0.00528868
10    0.00015310  0.00077429  0.00143474  0.00189254  0.00216243  0.00231575  0.00241286  0.00250288

Finally we deal with the case that the functions p and q are not integrable. Consider the problem

$$y' + \frac{5}{1-x}\,y = \frac{5x^4}{1-x}, \qquad y(0) = 0, \tag{25}$$

which has the solution

$$y(x) = x^5.$$

The coefficient and free term are

$$p(x) = \frac{5}{1-x}, \qquad q(x) = \frac{5x^4}{1-x}.$$

Since $p$ and $q$ are not integrable functions on the interval $[0,1]$, Theorem 1 is not valid, but we may obtain a numerical solution $\bar{y}_n$ on the interval $[0, 1-\frac{1}{2^n}[$ by the multistep algorithm. In this case we illustrate the convergence by choosing some points of the interval $[0,1[$ and computing the absolute difference between the solution and the numerical solution at these points. The results appear in Table 3.

Observe that in this case the method only works for large values of $n$ if $x$ is close to $1$. On the other hand, we can see that in this example the absolute difference between the solution and the numerical solution is also reduced almost by half when the value of $n$ increases by one.
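For completeness, this computation can be sketched in Python as well. The block below is our own self-contained reimplementation of the Section 8 recursion, run only up to $i = 2^n - 1$ so that the last dyadic interval, where $p$ and $q$ are not integrable, is never touched; approximating the integral means by a midpoint rule is an implementation choice.

```python
def multistep_truncated(p, q, eta, n, quad_pts=64):
    # Multistep recursion, run only for i = 1, ..., 2^n - 1, i.e. on
    # [0, 1 - 2^-n[, skipping the last dyadic interval where p and q
    # need not be integrable.
    N = 2 ** n
    h = 1.0 / N

    def mean(f, i):
        # approximate integral mean of f over I_n(i) (midpoint rule)
        a = (i - 1) * h
        return sum(f(a + (j + 0.5) * h / quad_pts)
                   for j in range(quad_pts)) / quad_pts

    values, acc = [], 0.0
    for i in range(1, N):  # stops before i = 2^n
        Sp, Sq = mean(p, i), mean(q, i)
        yi = (eta + acc + Sq / (2 * N)) / (1 + Sp / (2 * N))
        acc += (Sq - Sp * yi) / N
        values.append(yi)
    return values  # values[i-1] approximates y((i-1)/2^n)

# problem (25): p and q blow up at x = 1, exact solution y(x) = x^5
p = lambda x: 5.0 / (1.0 - x)
q = lambda x: 5.0 * x ** 4 / (1.0 - x)
approx = multistep_truncated(p, q, 0.0, 6)
```

The values of `approx` at $x = \frac12$ and $x = \frac34$ should differ from $x^5$ by errors of the size reported in the $n = 6$ row of Table 3.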

11 Conclusion

The method designed by Chen and Hsiao for solving systems of linear differential equations with constant coefficients may be extended to the case of one equation with not necessarily constant coefficients. The proposed numerical solution is a Walsh polynomial, i.e. a piecewise constant function on intervals of length $\frac{1}{2^n}$. The approach to the problem was as general as possible, considering the functions in the differential equation continuous and integrable on the interval $[0,1[$. In this case the numerical solution always exists, except for finitely many positive integers $n$, and it converges uniformly to the exact solution. The disadvantage of the method, the need to solve very large linear systems, may be avoided by the use of a multistep algorithm.

With a few modifications the method also works in case the functions in the differential equation are continuous but not integrable on the interval $[0,1[$. The multistep algorithm gives us a numerical solution on the interval $[0, 1-\frac{1}{2^n}[$ in this instance. This allows us to approximate the value of the exact solution at every point of the interval $[0,1[$, but the uniform convergence is not true in general.

Acknowledgement.

The first author was supported by the projects EFOP-3.6.1-16-2016-00022 and EFOP-3.6.2-16-2017-00015, supported by the European Union and co-financed by the European Social Fund. The second author was supported by the project GINOP-2.2.1-15-2017-00055.

References

[1] Blachman, N. M., Sinusoids versus Walsh functions, Proceedings of the IEEE, 62(3) (1974), 346–354.

[2] Chen, C. F. and Hsiao, C. H., A state-space approach to Walsh series solution of linear systems, International Journal of Systems Science, 6(9) (1975), 833–858.

[3] Chen, C. F. and Hsiao, C. H., A Walsh series direct method for solving variational problems, Journal of the Franklin Institute, 300(4) (1975), 265–280.

[4] Chen, C. F. and Hsiao, C. H., Design of piecewise constant gains for optimal control via Walsh functions, IEEE Transactions on Automatic Control, 20(5) (1975), 596–603.

[5] Chen, C. F. and Hsiao, C. H., Time-domain synthesis via Walsh functions, in: Proceedings of the Institution of Electrical Engineers, volume 122, pages 565–570, IET, 1975.

[6] Chen, C. F. and Hsiao, C. H., Walsh series analysis in optimal control, International Journal of Control, 21(6) (1975), 881–897.

[7] Chen, W.-L. and Shih, Y.-P., Shift Walsh matrix and delay-differential equations, IEEE Transactions on Automatic Control, 23(6) (1978), 1023–1028.

[8] Corrington, M., Solution of differential and integral equations with Walsh functions, IEEE Transactions on Circuit Theory, 20(5) (1973), 470–476.

[9] Fine, N. J., On the Walsh functions, Trans. Am. Math. Soc., 65 (1949), 372–414.

[10] Gát, Gy. and Toledo, R., A numerical method for solving linear differential equations via Walsh functions, in: Advances in Information Science and Applications, volume 2, pages 334–339, Proceedings of the 18th International Conference on Computers (part of CSCC 2014), Santorini Island, Greece, July 17–21, 2014.

[11] Gát, Gy. and Toledo, R., Estimating the error of the numerical solution of linear differential equations with constant coefficients via Walsh polynomials, Acta Math. Acad. Paedagog. Nyházi. (N.S.), 31(2) (2015), 309–330.

[12] Gibbs, J. E. and Gebbie, H. A., Application of Walsh functions to transform spectroscopy, Nature, 224 (1969), 1012–1013.

[13] Gulamhusein, M. N., Simple matrix-theory proof of the discrete dyadic convolution theorem, Electronics Letters, 9(10) (1973), 238–239.

[14] Harmuth, H. F., Transmission of Information by Orthogonal Functions, 2nd ed., Springer-Verlag, Berlin–Heidelberg–New York, 1972.

[15] Lukomskii, D. S., Lukomskii, S. F. and Terekhin, P. A., Solution of Cauchy problem for equation first order via Haar functions, Izv. Saratov Univ. (N.S.), Ser. Math. Mech. Inform., 16 (2016), 151–159.

[16] Ohta, T., Expansion of Walsh functions in terms of shifted Rademacher functions and its applications to the signal processing and the radiation of electromagnetic Walsh waves, IEEE Transactions on Electromagnetic Compatibility, EMC-18 (1976), 201–205.

[17] Rao, G. P., Piecewise Constant Orthogonal Functions and Their Application to Systems and Control, volume 55, Springer, 1983.

[18] Schipp, F., Wade, W. R. and Simon, P., Walsh Series. An Introduction to Dyadic Harmonic Analysis, Adam Hilger, Bristol and New York, 1990.

[19] Shih, Y.-P. and Han, J.-Y., Double Walsh series solution of first-order partial differential equations, International Journal of Systems Science, 9(5) (1978), 569–578.

[20] Stankovic, R. S. and Miller, D. M., Using QMDD in numerical methods for solving linear differential equations via Walsh functions, in: 2015 IEEE International Symposium on Multiple-Valued Logic, pages 182–188, May 2015.

[21] Walsh, J. L., A closed set of normal orthogonal functions, Am. J. Math., 45 (1923), 5–24.

