IAM - INSTITUTE OF APPLIED MATHEMATICS

Enumeration of irreducible polynomials with prescribed coefficients

Emrah Sercan Yılmaz

University College Dublin

Place: IAM-S212

Date / Time: 10.01.2017 / 15.30

Abstract. In this seminar, we will present the general theory of the enumeration of irreducible polynomials with prescribed coefficients from the beginning. We will explain how these counts are related to (fibre products of) supersingular curves.
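
The counting problem itself is elementary to state. As a toy illustration of it (and not of the curve-theoretic machinery of the talk), the following Python sketch brute-force counts monic irreducible polynomials over GF(2) of degree $n$ whose $x^{n-1}$ (trace) coefficient is prescribed:

```python
# Brute-force count of monic irreducible polynomials over GF(2) with a
# prescribed coefficient of x^(n-1) (the trace coefficient). Polynomials
# are encoded as integers: bit i holds the coefficient of x^i.

def gf2_mod(a: int, b: int) -> int:
    """Remainder of the polynomial division a mod b over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(p: int, n: int) -> bool:
    """Trial division by every polynomial of degree 1..n//2."""
    for d in range(1, n // 2 + 1):
        for q in range(1 << d, 1 << (d + 1)):  # all degree-d polynomials
            if gf2_mod(p, q) == 0:
                return False
    return True

def count_with_trace(n: int, trace: int) -> int:
    """Monic irreducibles of degree n whose x^(n-1) coefficient equals trace."""
    count = 0
    for low in range(1 << (n - 1)):            # coefficients of x^0..x^(n-2)
        p = (1 << n) | (trace << (n - 1)) | low
        if is_irreducible(p, n):
            count += 1
    return count

for n in range(2, 9):
    print(n, count_with_trace(n, 0), count_with_trace(n, 1))
```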

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

Firdevs Ulus

Department of Industrial Engineering
Bilkent University
Invited by: Murat Manguoğlu
Place: IAM-S209

Date / Time: 03.01.2017 / 15.40

Abstract. A parametric simplex algorithm for solving linear vector optimization problems (LVOPs) is presented. This algorithm can be seen as a variant of the multi-objective simplex algorithm. Unlike that algorithm, the proposed one works in the parameter space and does not aim to find the set of all efficient solutions. Instead, it finds a ‘solution’, a subset of efficient solutions that allows one to generate the whole efficient frontier. In that sense, it can also be seen as a generalization of the parametric self-dual simplex algorithm, which was originally designed for solving single-objective linear optimization problems and was modified to solve two-objective bounded LVOPs with the positive orthant as the ordering cone. The algorithm proposed here works for any dimension, any solid pointed polyhedral ordering cone, and for bounded as well as unbounded problems. Numerical results are provided to compare the proposed algorithm with an objective-space-based LVOP algorithm (Benson's) and with the multi-objective simplex (Evans-Steuer) algorithm. The results show that for non-degenerate problems the proposed algorithm outperforms Benson's algorithm and is on par with the Evans-Steuer algorithm. For highly degenerate problems Benson's algorithm outperforms the simplex-type algorithms; however, the parametric simplex algorithm is computationally much more efficient than the Evans-Steuer algorithm for these problems.
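
The proposed algorithm itself is not reproduced here. As a naive point of contrast, the sketch below traces the efficient frontier of a small bi-objective LP by sweeping the weight of a weighted-sum scalarization with scipy.optimize.linprog; the example LP is illustrative only, and this brute-force sweep is precisely the kind of approach a parameter-space simplex method improves upon:

```python
# Weighted-sum sweep over a bi-objective LP: min (c1.x, c2.x) s.t. A x <= b, x >= 0.
# This is NOT the parametric simplex algorithm of the talk, only a brute-force
# illustration of how a scalar parameter traces out the efficient frontier.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([-1.0, 0.0])   # objective 1: maximize x1
c2 = np.array([0.0, -1.0])   # objective 2: maximize x2
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])

frontier = set()
for w in np.linspace(0.01, 0.99, 99):  # strictly positive weights give efficient points
    res = linprog(w * c1 + (1 - w) * c2, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    if res.success:
        frontier.add(tuple(np.round(res.x, 6)))

print(sorted(frontier))  # vertex solutions generating the efficient frontier
```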

EM Algorithm for a Markov Chain Observed via Gaussian Noise and Point Process Information

Zehra Eksi-Altay

Vienna University of Economics and Business
Invited by: Yeliz Yolcu Okur
Place: IAM-S209

Date / Time: 27.12.2016 / 15.40

Abstract. In this paper we deal with the parameter estimation of a finite-state Markov chain observed via Gaussian noise and point process information. To this end, we use the Expectation Maximization (EM) algorithm. This amounts to the derivation of finite-dimensional filters for the related quantities. In this context, we obtain both exact and unnormalized filters. Next, we compute discretized robust versions of the unnormalized filters. Moreover, we introduce a novel goodness-of-fit test to check how well the estimated model explains the given data set. Finally, we run a simulation study to test the speed and accuracy of the algorithm. In particular, we compare the estimates resulting from the robust and naive discretizations and assess the value of the point process information.
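
As a minimal discrete-time stand-in for the filtering ingredient (not the paper's continuous-time filters, the point-process part, or the EM iteration itself), the following sketch runs the normalized prediction/update recursion for a two-state chain observed in Gaussian noise:

```python
# Discrete-time Bayes filter for a finite-state Markov chain observed in
# Gaussian noise: pi_t(i) = P(X_t = i | y_1..y_t). A toy analogue of the
# exact (normalized) filters mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # transition matrix
mu, sigma = np.array([0.0, 1.0]), 0.5        # state-dependent means, noise std

# simulate the chain and its noisy observations
T, x = 500, 0
states, obs = [], []
for _ in range(T):
    x = rng.choice(2, p=P[x])
    states.append(x)
    obs.append(mu[x] + sigma * rng.standard_normal())

pi, correct = np.array([0.5, 0.5]), 0
for t, y in enumerate(obs):
    pred = pi @ P                                  # prediction step
    lik = np.exp(-0.5 * ((y - mu) / sigma) ** 2)   # Gaussian likelihoods
    pi = pred * lik
    pi /= pi.sum()                                 # normalization
    correct += (pi.argmax() == states[t])

print("filter accuracy:", correct / T)
```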


Efficient methods to generate cryptographically good binary linear transformations 

Tolga Sakallı
Trakya University
Department of Computer Engineering

Place: IAM-S209

Date / Time: 20.12.2016 / 15.40

Abstract. In this presentation, we propose new methods using a divide-and-conquer strategy to generate $n \times n$ binary matrices (for composite $n$) with a high/maximum branch number and the same Hamming weight in each row and column. We introduce new types of binary matrices, namely $(BHwC)_{t,m}$ and $(BCwC)_{q,m}$ types, which are a combination of Hadamard and circulant matrices, and the recursive use of circulant matrices, respectively. With the help of these hybrid structures, the search space to generate a binary matrix with a high/maximum branch number is drastically reduced. Using the proposed methods, we focus on generating $12 \times 12$, $16 \times 16$ and $32 \times 32$ binary matrices with a maximum or maximum-achievable branch number and low implementation costs to be used in block ciphers. Then, we discuss the implementation properties of the generated binary matrices and present experimental results for binary matrices of these sizes. Finally, we apply the proposed methods to larger sizes, i.e., $48 \times 48$, $64 \times 64$ and $80 \times 80$ binary matrices, which have applications in secure multi-party computation and fully homomorphic encryption.
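
For small sizes the branch number can be computed by exhaustive search, which also makes clear why structured constructions matter: the naive search space doubles with every extra column. A minimal sketch (the example matrix is an arbitrary illustration, not one of the matrices from the talk):

```python
# Brute-force (differential) branch number of an n x n binary matrix over GF(2):
#   B(M) = min over nonzero x of wt(x) + wt(M x).
import numpy as np
from itertools import product

def branch_number(M: np.ndarray) -> int:
    n = M.shape[0]
    best = 2 * n
    for bits in product([0, 1], repeat=n):   # all 2^n inputs
        x = np.array(bits)
        if not x.any():
            continue                         # skip the zero vector
        y = M.dot(x) % 2
        best = min(best, int(x.sum() + y.sum()))
    return best

# A 4 x 4 circulant example: rows are cyclic shifts of the first row.
row = np.array([0, 1, 1, 1])
M = np.array([np.roll(row, i) for i in range(4)])
print(branch_number(M))  # prints 4 (binary matrices cannot attain the MDS bound n+1)
```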


Protein-Protein Interaction Network’s Data

Vilda Purutçuoğlu

Department of Statistics
METU
Invited by: Gerhard Wilhelm Weber
Place: IAM-S209

Date / Time: 13.12.2016 / 15.40

Abstract. In systems biology, the protein-protein interaction network is one of the common types of network structures. There are a number of different approaches to describe these complex systems under various assumptions. In this talk, we initially present a well-known modelling approach, called the Gaussian graphical model (GGM), to explain the steady-state behavior of the systems. This model represents the interactions between proteins via the precision matrix, whose entries are defined under the conditional independence of the states. Although GGM and its major inference method, glasso, are successful in describing small and moderate systems, they have certain drawbacks, such as the strict normality assumption of the model and the high computational demand of the estimation, particularly for high-dimensional systems. Hence, we suggest several alternatives to overcome these challenges. To address the problem of normality and the computational inefficiency, we propose a new non-parametric model based on the lasso regression. Then, to deal with the restriction on the dimension of the systems, we further propose a copula GGM and Bayesian inference of the model parameters. Finally, we investigate the possibility of improving the raw data and excluding the batch effect as a pre-processing step before any modelling. We evaluate the performance of all suggested models and the underlying normalization steps via real and simulated datasets.
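
As background, the standard GGM workflow that the talk takes as its starting point fits in a few lines with scikit-learn's GraphicalLasso; the data below are simulated, not proteomic measurements:

```python
# Sketch of the GGM/glasso step: estimate a sparse precision matrix and read
# its nonzero off-diagonal entries as edges (conditional dependencies).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]           # induce one dependency: node 0 -- node 1

model = GraphicalLasso(alpha=0.1).fit(X)
prec = model.precision_
edges = [(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(prec[i, j]) > 1e-8]
print(edges)                        # expected to contain (0, 1)
```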


The Political Economy of Financialization

Ali Tarhan

Independent Researcher, Political Economy
Invited by: Bülent Karasözen

Place: IAM-S209

Date / Time: 06.12.2016 / 15.40

Abstract. Financialization has been the key operating vehicle of neoliberalism for nearly four decades. In Gerald Epstein’s words, it “refers to the increasing importance of financial markets, financial motives, financial institutions, and financial elites in the operations of the economy and its governing institutions, both at the national and international levels.” The first two decades of this process constitute the political preparation stage. The latter two decades, marked by the Gramm-Leach-Bliley Act of 1999, portray the advancement phase of financialization. This whole era has had different impacts on the financialized core countries (the United States and the United Kingdom) and on peripheral countries. While financialization has gone hand in hand with deindustrialization in the former group, it has created a financial Dutch disease in the latter by decreasing the savings ratio and promoting consumerism with an abundance of readily available foreign funds. Consequently, lenders and borrowers of the global financial system have become closely tied by financial threads, which has eased the spread of financial crises. These intertwined financial interactions of the core and periphery have also established a new network of power relations between the two realms. Experience after the 2008 crisis shows that the still unregulated financial industry of the core states, especially of the US, has gained more political power than it had in the pre-crisis epoch. On the other hand, the peripheral states, caught unprepared by the last crisis, have lost their financial stability. Therefore, the purpose of this seminar is to discuss financialization beginning with its early stages, with a special emphasis on center-periphery relations.


A quick introduction to Weierstrass points on algebraic curves

Luciane Quoos

Department of Mathematics, Universidade Federal do Rio de Janeiro, Brasil

Invited by: Ferruh Özbudak

Place: IAM-S209

Date / Time: 29.11.2016 / 15.40

Abstract. We are going to introduce the concept of the Weierstrass semigroup at a point of an algebraic curve and its relevance for the study of properties of the curve. For Kummer extensions $y^m = f(x)$, with $f(x)$ a polynomial, we discuss conditions for an integer to be a Weierstrass gap at a point $P$. For totally ramified points, the condition is necessary and sufficient. As a consequence we present an application to maximal curves: we exhibit a class of polynomials $f(x) \in \mathbb{F}_{q^2}[x]$ for which the $\mathbb{F}_{q^2}$-maximality of $y^m = f(x)$ implies that $m$ is a divisor of $q+1$.
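
For a concrete feel for gaps, note that in the classical Kummer examples $y^m = f(x)$ with $\deg f = d$ coprime to $m$, the Weierstrass semigroup at the totally ramified point at infinity is generated by $m$ and $d$ (the talk treats more general conditions). The gaps of such a numerical semigroup can then be listed by elementary arithmetic:

```python
# Gaps of the numerical semigroup <m, d> with gcd(m, d) = 1, as a toy
# illustration of Weierstrass gaps at a totally ramified point.
from math import gcd

def gaps(m: int, d: int) -> list[int]:
    assert gcd(m, d) == 1
    frob = m * d - m - d                     # Frobenius number of <m, d>
    semigroup = {a * m + b * d
                 for a in range(frob // m + 2)
                 for b in range(frob // d + 2)}
    return [g for g in range(1, frob + 1) if g not in semigroup]

print(gaps(3, 4))  # [1, 2, 5]: three gaps, matching genus (m-1)(d-1)/2 = 3
print(gaps(2, 5))  # [1, 3]: the genus-2 example y^2 = quintic
```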

Nonlinear integral equations for inverse boundary value problems

Olha Ivanyshyn Yaman

Izmir Institute of Technology Mathematics Department

Invited by: Bülent Karasözen

Place: IAM-S209

Date / Time: 22.11.2016 / 15.40

Abstract. The inverse problems of determining the shape of an obstacle from knowledge of the far field pattern for the scattering of time-harmonic plane waves are considered. Using Green's representation theorem, a system of nonlinear boundary integral equations is derived which is equivalent to the inverse problem. Due to the nonlinearity and the inherent ill-posedness, the system of integral equations is solved via an iteratively regularized Newton-type method. The feasibility of the method is illustrated by numerical examples for acoustic and electromagnetic obstacles in two and three dimensions.
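
The structure of such an iteratively regularized Newton-type step can be sketched on a finite-dimensional toy system, which stands in for (and is far simpler than) the boundary integral equations of the talk:

```python
# Levenberg-Marquardt-style regularized Newton iteration on a toy system
# F(x) = 0: solve (J^T J + alpha_k I) h = -J^T F(x_k), then x_{k+1} = x_k + h,
# with a decreasing regularization parameter alpha_k.
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # unit circle
                     x[1] - x[0] ** 2])             # parabola

def J(x):  # Jacobian of F
    return np.array([[2 * x[0], 2 * x[1]],
                     [-2 * x[0], 1.0]])

x, alpha = np.array([2.0, 2.0]), 1.0
for _ in range(30):
    Jk, Fk = J(x), F(x)
    h = np.linalg.solve(Jk.T @ Jk + alpha * np.eye(2), -Jk.T @ Fk)
    x, alpha = x + h, 0.7 * alpha                   # relax the regularization
print(x, F(x))  # converges to an intersection of the circle and the parabola
```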

An Empirical Model of the International Cost of Equity

Mehmet Uzunkaya

Ministry of Development & METU-IAM
Invited by: Yeliz Yolcu Okur

Place: IAM-S209

Date / Time: 15.11.2016 / 15.40

Abstract. The aim of the study is to propose an empirical model of the international cost of equity by investigating and analyzing the long-run relation between disaggregated country risk ratings and country stock market index returns for a large panel of countries. The study tests the hypothesis that, given the available theoretical and empirical evidence, country risk ratings and country stock market index returns should move together in the long run and that there should be a long-run equilibrium between them; thus country risk ratings, with their forward-looking nature regarding the political, macroeconomic and financial fundamentals of a large number of countries, may behave as long-run state variables for stock returns to the extent that they are undiversifiable internationally. The results of the analysis provide evidence in favor of the argument that disaggregated country risk ratings, in particular the political and economic risk ratings, are related to stock market returns in the long run. Using this relation, an empirical model of the international cost of equity is proposed. The model takes country risk ratings as inputs and finds the international cost of equity for a specific country with known risk ratings.
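
The long-run relation in question is a cointegration property. As a minimal illustration of the kind of test involved (on simulated stand-ins sharing a common stochastic trend, not on real country data), one can use the Engle-Granger test from statsmodels:

```python
# Engle-Granger cointegration test on two simulated series that share a
# common random-walk trend (stand-ins for a risk-rating series and a stock
# index level; no real data are used here).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)
T = 500
trend = np.cumsum(rng.standard_normal(T))          # shared stochastic trend
rating = trend + 0.3 * rng.standard_normal(T)
index_level = 2.0 * trend + 0.5 * rng.standard_normal(T)

tstat, pvalue, _ = coint(rating, index_level)
print(f"Engle-Granger t-stat = {tstat:.2f}, p-value = {pvalue:.4f}")
# a small p-value rejects "no cointegration", supporting a long-run relation
```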


Taking Advice in Forecasting: Presumed vs. Experienced Credibility

M. Sinan Gönül

Department of Business Administration

Middle East Technical University

Place: IAM-S209

Date / Time: 08.11.2016 / 15.40

Abstract. During the forecasting activities of organizations, when externally generated predictions are acquired and used as decision-making advice, the characteristics of the source providing this external forecasting advice gain paramount importance. Recent research has revealed that cues leading to the perceived credibility of the source can either be attained through experiencing its past performance or built upon presumptions and/or the reputation of the source. This paper reports the results of three experiments that investigate the effects of these different types of source credibility (experienced and presumed) on judgmental adjustments of external forecasts. The first study examined the effects of experienced credibility via three groups: i) a control group with no cues about experienced credibility, ii) a low experienced credibility group, and iii) a high experienced credibility group. Similarly, the second study examined presumed credibility, again via three groups: i) a control group with no cues about presumed credibility, ii) a low presumed credibility group, and iii) a high presumed credibility group. The third study examined the interaction between the two credibility types and utilized a 2x2 design with the factors i) presumed credibility (high vs. low) and ii) experienced credibility (high vs. low). Findings from these studies are discussed and guidance for future research is given.