First in Romania! The "Alexandru Ioan Cuza" University of Iași has launched a well-known maritime research vessel in Neamț County – PHOTO

Publication: Bună Ziua Iași

Autumn admission to Iași universities starts today: few state-funded places left after the summer's record intake

Starting today, 7 September 2020, the universities of Iași are taking a break from managing the many preparations for the new academic year and opening the autumn admission session, after uniformly strong results this summer.

After centralising all applications and closing the place-confirmation period, the "Gheorghe Asachi" Technical University of Iași is offering roughly 300 state-funded places for bachelor's programmes and about 400 state-funded places for master's programmes.

Applications run from 7 to 18 September; provisional results will be published by 21 September; students must submit their original documents by 23 September; and one day later, on 24 September, the final results will be posted.

At bachelor's level, as in previous years, all state-funded places at both the Faculty of Automatic Control and Computer Engineering and the "G.M. Cantacuzino" Faculty of Architecture were filled in the summer session.

Students competing for the places offered by TUIASI may apply, with a single fee and a single file, to as many faculties of the "Gheorghe Asachi" Technical University of Iași as they wish, ranked in order of preference. They are then declared admitted according to their admission average and the options they listed.
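By way of illustration, the allocation described above can be sketched as a simple merit-then-preference procedure. This is a hypothetical toy model, not TUIASI's actual algorithm, and the faculty names and capacities are invented:

```python
# Illustrative sketch (NOT TUIASI's actual procedure): candidates are ranked
# by admission average, then each is placed in the highest-preference faculty
# that still has a free place.

def allocate(candidates, capacity):
    """candidates: list of (name, average, [faculties in preference order])."""
    places = dict(capacity)          # remaining places per faculty
    admitted = {}
    # Higher averages are considered first, as in a merit-based ranking.
    for name, avg, prefs in sorted(candidates, key=lambda c: -c[1]):
        for faculty in prefs:
            if places.get(faculty, 0) > 0:
                places[faculty] -= 1
                admitted[name] = faculty
                break
    return admitted

result = allocate(
    [("Ana", 9.4, ["Automatics", "Architecture"]),
     ("Dan", 8.7, ["Automatics", "Architecture"]),
     ("Ioana", 9.9, ["Automatics"])],
    {"Automatics": 2, "Architecture": 1},
)
# Dan's first choice is full by the time his average is reached,
# so he is admitted to his second option.
```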

The "Alexandru Ioan Cuza" University of Iași will offer 1,629 places, 210 state-funded and 1,319 fee-paying for Romanian candidates. At master's level 1,041 places are available, 929 of them for Romanian candidates, including 188 state-funded.

"A novelty of this admission session is the opportunity for candidates to apply to one of the five specialisations of the didactic master's programme: biology, physical education and sport, physics, geography, Orthodox theology. Fifty state-funded places are open for the didactic master's, which is organised as full-time study. The programme is aimed at graduates holding a bachelor's degree who wish to pursue a teaching career. Such graduates may apply to the didactic master's in the same fundamental field as the specialisation of their bachelor's studies," the university's representatives announced.

The Faculty of Law (bachelor's), the Faculty of Physical Education and Sport (bachelor's and master's) and the Faculty of Computer Science (bachelor's and master's) are not holding admissions in this session, as all their places were filled in July.

At the "Ion Ionescu de la Brad" University of Agricultural Sciences and Veterinary Medicine, admission will run from 7 to 10 September, with 184 bachelor's places, only 62 of them state-funded and 122 fee-paying, while at master's level 67 places are available, 42 of them state-funded.

At all Iași universities applications can be submitted online, and some also allow students to hand in their admission file in person.

Publication: Ziarul de Iași and Evenimentul

In defence of polytechnics and criminology

The abolition of polytechnics laid the seeds of the current problems facing universities, writes Ramesh Kapadia, while Mark Littler and Peter Quinn defend criminology courses

Prof Stefan Collini (English universities are in peril because of 10 years of calamitous reform, 31 August) presents some powerful arguments about chaotic university reforms. I would make one further point. A previous Conservative government laid the seeds for this debacle 30 years ago when it made polytechnics into universities, believing that this would raise their status. It is clear that this has not happened.

I was proud to be a senior lecturer at the Polytechnic of the South Bank with its four-year sandwich degree in mathematics and computing, enabling less academically qualified students to succeed and get jobs. I also ran a postgraduate course in maths education for students whose first degree fell short of expectations. Many students with third-class degrees went on to senior educational posts. I also had the privilege of teaching mature students in their 80s.

Polytechnics were set up to be different, with a stronger focus on teaching. They were local, serving their communities effectively and offering more vocationally oriented courses, though applied research was also encouraged. This alternative was sadly abolished by the stroke of a pen in 1992.
Prof Ramesh Kapadia
Surbiton, London

  • I hope I have misunderstood Prof Paddy Hillyard’s comments (Letters, 2 September) criticising criminology’s expansion. If not, they exemplify what is wrong with the thinking of Britain’s social science establishment. While it is true that student numbers in criminology have surged, this is neither the result of disciplinary “imperialism” nor a cause for concern. Perhaps it simply reflects that what and how criminology is taught is a marketable package that gives students what they want.

Rather than decrying our success, those in less happy fields might wish to reflect on lessons they can learn. If students are less concerned with the more “significant” fields of inquiry, perhaps it’s because they do a poorer job of explaining why they matter.
Mark Littler
University of Huddersfield

  • The sociologist Prof Hillyard writes disparagingly about criminology degrees. To add balance, when I read criminology at Cambridge in the 1960s, the university was riven with debate as to whether sociology was an academic discipline at all or merely a borrowing from several others.

Publication: The Guardian

From viral conspiracies to exam fiascos, algorithms come with serious side effects

Will Thursday 13 August 2020 be remembered as a pivotal moment in democracy’s relationship with digital technology? Because of the coronavirus outbreak, A-level and GCSE examinations had to be cancelled, leaving education authorities with a choice: give the kids the grades that had been predicted by their teachers, or use an algorithm. They went with the latter.

The outcome was that more than one-third of results in England (35.6%) were downgraded by one grade from the mark issued by teachers. This meant that a lot of pupils didn't get the grades they needed to get to their university of choice. More ominously, the proportion of private-school students receiving A and A* grades was more than twice as high as the proportion of students at comprehensive schools, underscoring the gross inequality in the British education system.

What happened next was predictable but significant. A lot of teenagers, realising that their life chances had just been screwed by a piece of computer code, took to the streets. “Fuck the algorithm” became a popular slogan. And, in due course, the government caved in and reversed the results – though not before a lot of emotional distress and administrative chaos had been caused. And then Boris Johnson blamed the fiasco on “a mutant algorithm” which, true to form, was a lie. No mutation was involved. The algorithm did what it said on the tin. The only mutation was in the behaviour of the humans affected by its calculations: they revolted against what it did.

And that was a genuine first – the only time I can recall when an algorithmic decision had been challenged in public protests that were powerful enough to prompt a government climbdown. In a world increasingly – and invisibly – regulated by computer code, this uprising might look like a promising precedent. But there are several good reasons, alas, for believing that it might instead be a blip. The nature of algorithms is changing, for one thing; their penetration into everyday life has deepened; and whereas the Ofqual algorithm’s grades affected the life chances of an entire generation of young people, the impact of the dominant algorithms in our unregulated future will be felt by isolated individuals in private, making collective responses less likely.

According to the Shorter Oxford English Dictionary, the word "algorithm" – meaning "a procedure or set of rules for calculation or problem-solving, now esp with a computer" – dates from the early 19th century, but it's only comparatively recently that it has penetrated everyday discourse. Programming is basically a process of creating new algorithms or adapting existing ones. The title of the first volume, published in 1968, of Donald Knuth's magisterial five-volume The Art of Computer Programming, for example, is "Fundamental Algorithms". So in one way the increasing prevalence of algorithms nowadays simply reflects the ubiquity of computers in our daily lives, especially given that anyone who carries a smartphone is also carrying a small computer.

The Ofqual algorithm that caused the exams furore was a classic example of the genre, in that it was deterministic and intelligible. It was a program designed to do a specific task: to calculate standardised grades for pupils based on information a) from teachers and b) about schools in the absence of actual examination results. It was deterministic in the sense that it did only one thing, and the logic that it implemented – and the kinds of output it would produce – could be understood and predicted by any competent technical expert who was allowed to inspect the code. (In that context, it’s interesting that the Royal Statistical Society offered to help with the algorithm but withdrew because it regarded the non-disclosure agreement it would have had to sign as unduly restrictive.)
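To make the "deterministic and intelligible" point concrete, here is a deliberately crude toy standardisation rule, not Ofqual's actual model: rank pupils by their teacher-assessed grade, then force the school's output to match a historical grade distribution. The pupil names and shares are invented; the point is that the logic is fully inspectable and the same inputs always give the same outputs:

```python
# Toy deterministic standardisation (illustrative only, NOT Ofqual's model):
# pupils are ranked by teacher-assessed grade, then grades are reassigned so
# that the school's output matches its historical grade distribution.

def standardise(teacher_grades, historical_share, grades=("A", "B", "C")):
    """teacher_grades: {pupil: grade}; historical_share: {grade: fraction}."""
    # Rank pupils best-first, using the teacher's assessment for ordering.
    ranked = sorted(teacher_grades, key=lambda p: grades.index(teacher_grades[p]))
    out, i = {}, 0
    for grade in grades:
        quota = round(historical_share.get(grade, 0) * len(ranked))
        for pupil in ranked[i:i + quota]:
            out[pupil] = grade
        i += quota
    # Anyone left over gets the lowest grade on the scale.
    for pupil in ranked[i:]:
        out[pupil] = grades[-1]
    return out

final = standardise(
    {"p1": "A", "p2": "A", "p3": "B", "p4": "C"},
    {"A": 0.25, "B": 0.5, "C": 0.25},
)
# One of the two teacher-assessed As is downgraded to B, because the school's
# historical share only "allows" one A - exactly the effect pupils protested.
```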


Classic algorithms are still everywhere in commerce and government (there’s one currently causing grief for Boris Johnson because it’s recommending allowing more new housing development in Tory constituencies than Labour ones). But they are no longer where the action is.

Since the early 1990s – and the rise of the web in particular – computer scientists (and their employers) have become obsessed with a new genre of algorithms that enable machines to learn from data. The growth of the internet – and the intensive surveillance of users that became an integral part of its dominant business model – started to produce torrents of behavioural data that could be used to train these new kinds of algorithm. Thus was born machine-learning (ML) technology, often referred to as “AI”, though this is misleading – ML is basically ingenious algorithms plus big data.


Machine-learning algorithms are radically different from their classical forebears. The latter take some input and some logic specified by the programmer and then process the input to produce the output. ML algorithms do not depend on rules defined by human programmers. Instead, they process data in raw form – for example text, emails, documents, social media content, images, voice and video. And instead of being programmed to perform a particular task they are programmed to learn to perform the task. More often than not, the task is to make a prediction or to classify something.
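The rules-versus-learning distinction can be sketched in a few lines. The following is a minimal, hypothetical example of a nearest-centroid classifier: no classification rule is written by the programmer; the decision boundary is derived entirely from labelled training data (the points and labels here are invented):

```python
# Minimal sketch of "learning the task from data": a nearest-centroid
# classifier. The programmer codes no classification rule; the decision
# boundary comes entirely from the labelled examples.

def train(examples):
    """examples: list of ((x, y), label) -> {label: centroid}."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def predict(model, point):
    # Classify by the closest learned centroid (squared Euclidean distance).
    return min(model, key=lambda lab: (model[lab][0] - point[0]) ** 2
                                      + (model[lab][1] - point[1]) ** 2)

model = train([((0, 0), "low"), ((1, 1), "low"), ((8, 9), "high"), ((9, 8), "high")])
label = predict(model, (7, 7))
```

Change the training data and the same code makes different predictions – which is exactly why the outputs of larger ML systems can surprise their own creators.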

This has the implication that ML systems can produce outputs that their creators could not have envisaged. Which in turn means that they are “uninterpretable” – their effectiveness is limited by the machines’ current inability to explain their decisions and actions to human users. They are therefore unsuitable if the need is to understand relationships or causality; they mostly work well where one only needs predictions. Which should, in principle, limit their domains of application – though at the moment, scandalously, it doesn’t.

Machine-learning is the tech sensation du jour and the tech giants are deploying it in all their operations. When the Google boss, Sundar Pichai, declares that Google plans to have “AI everywhere”, what he means is “ML everywhere”. For corporations like his, the attractions of the technology are many and varied. After all, in the past decade, machine learning has enabled self-driving cars, practical speech recognition, more powerful web search, even an improved understanding of the human genome. And lots more.

Because of its ability to make predictions based on observations of past behaviour, ML technology is already so pervasive that most of us encounter it dozens of times a day without realising it. When Netflix or Amazon tell you about interesting movies or goods, that’s ML being deployed as a “recommendation engine”. When Google suggests other search terms you might consider, or Gmail suggests how the sentence you’re composing might end, that’s ML at work. When you find unexpected but possibly interesting posts in your Facebook newsfeed, they’re there because the ML algorithm that “curates” the feed has learned about your preferences and interests. Likewise for your Twitter feed. When you suddenly wonder how you’ve managed to spend half an hour scrolling through your Instagram feed, the reason may be that the ML algorithm that curates it knows the kinds of images that grab you.


The tech companies extol these services as unqualified public goods. What could possibly be wrong with a technology that learns what its users want and provides it? And at no charge? Quite a lot, as it happens. Take recommendation engines. When you watch a YouTube video you see a list of other videos that might interest you down the right-hand side of the screen. That list has been curated by a machine-learning algorithm that has learned what has interested you in the past, and also knows how long you’ve spent during those previous viewings (using time spent as a proxy for level of interest). Nobody outside YouTube knows exactly what criteria the algorithm is using to choose recommended videos, but because it’s basically an advertising company, one criterion will definitely be: “maximise the amount of time a viewer spends on the site”.
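Since nobody outside YouTube knows the real criteria, the "maximise time on site" idea can only be sketched hypothetically. Here past watch time per topic is used as a proxy for interest, and candidate videos are ranked by it; all names and numbers are invented:

```python
# Hypothetical sketch of a "maximise time on site" recommender (not YouTube's
# actual system): candidates are ranked by the viewer's average past watch
# time on each topic, used as a proxy for interest.

def rank(history, candidates):
    """history: list of (topic, seconds watched); candidates: list of (video, topic)."""
    totals = {}
    for topic, seconds in history:
        t, n = totals.get(topic, (0, 0))
        totals[topic] = (t + seconds, n + 1)
    avg = {topic: t / n for topic, (t, n) in totals.items()}
    # Highest expected watch time first; unseen topics default to zero.
    return sorted(candidates, key=lambda v: -avg.get(v[1], 0.0))

recs = rank(
    [("chess", 600), ("chess", 900), ("news", 120)],
    [("Video A", "news"), ("Video B", "chess"), ("Video C", "cooking")],
)
```

Note the feedback loop such a ranking creates: whatever the viewer lingers on is shown more of, which is one mechanism behind the "rabbit hole" effect discussed below.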

In recent years there has been much debate about the impact of such a maximisation strategy. In particular, does it push certain kinds of user towards increasingly extremist content? The answer seems to be that it can. “What we are witnessing,” says Zeynep Tufekci, a prominent internet scholar, “is the computational exploitation of a natural human desire: to look ‘behind the curtain’, to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

What we have also discovered since 2016 is that the micro-targeting enabled by ML algorithms deployed by social media companies has weakened or undermined some of the institutions on which a functioning democracy depends. It has, for example, produced a polluted public sphere in which mis- and disinformation compete with more accurate news. And it has created digital echo-chambers and led people to viral conspiracy theories such as QAnon and malicious content orchestrated by foreign powers and domestic ideologues.

The side-effects of machine-learning within the walled gardens of online platforms are problematic enough, but they become positively pathological when the technology is used in the offline world by companies, government, local authorities, police forces, health services and other public bodies to make decisions that affect the lives of citizens. Who should get what universal benefits? Whose insurance premiums should be heavily weighted? Who should be denied entry to the UK? Whose hip or cancer operation should be fast-tracked? Who should get a loan or a mortgage? Who should be stopped and searched? Whose children should get a place in which primary school? Who should get bail or parole, and who should be denied them? The list of such decisions for which machine-learning solutions are now routinely touted is endless. And the rationale is always the same: more efficient and prompt service; judgments by impartial algorithms rather than prejudiced, tired or fallible humans; value for money in the public sector; and so on.

The overriding problem with this rosy tech “solutionism” is the inescapable, intrinsic flaws of the technology. The way its judgments reflect the biases in the data-sets on which ML systems are trained, for example – which can make the technology an amplifier of inequality, racism or poverty. And on top of that there’s its radical inexplicability. If a conventional old-style algorithm denies you a bank loan, its reasoning can be explained by examination of the rules embodied in its computer code. But when a machine-learning algorithm makes a decision, the logic behind its reasoning can be impenetrable, even to the programmer who built the system. So by incorporating ML into our public governance we are effectively laying the foundations of what the legal scholar Frank Pasquale warned against in his 2016 book The Black Box Society.

In theory, the EU’s General Data Protection Regulation (GDPR) gives people a right to be given an explanation for an output of an algorithm – though some legal experts are dubious about the practical usefulness of such a “right”. Even if it did turn out to be useful, though, the bottom line is that injustices inflicted by an ML system will be experienced by individuals rather than by communities. The one thing machine learning does well is “personalisation”. This means that public protests against the personalised inhumanity of the technology are much less likely – which is why last month’s demonstrations against the output of the Ofqual algorithm could be a one-off.

In the end the question we have to ask is: why is the Gadarene rush of the tech industry (and its boosters within government) to deploy machine-learning technology – and particularly its facial-recognition capabilities – not a major public policy issue?

The explanation is that for several decades ruling elites in liberal democracies have been mesmerised by what one can only call “tech exceptionalism” – ie the idea that the companies that dominate the industry are somehow different from older kinds of monopolies, and should therefore be exempt from the critical scrutiny that consolidated corporate power would normally attract.

The only consolation is that recent developments in the US and the EU suggest that perhaps this hypnotic regulatory trance may be coming to an end. To hasten our recovery, therefore, a thought experiment might be helpful.

Imagine what it would be like if we gave the pharmaceutical industry the leeway that we currently grant to tech companies. Any smart biochemist working for, say, AstraZeneca, could come up with a strikingly interesting new molecule for, say, curing Alzheimer’s. She would then run it past her boss, present the dramatic results of preliminary experiments to a lab seminar after which the company would put it on the market. You only have to think of the Thalidomide scandal to realise why we don’t allow that kind of thing. Yet it is exactly what the tech companies are able to do with algorithms that turn out to have serious downsides for society.

What that analogy suggests is that we are still at the stage with tech companies that societies were in the era of patent medicines and snake oil. Or, to put it in a historical frame, we are somewhere between 1906, when the Pure Food and Drug Act was passed by the US Congress, and 1938, the year Congress passed the Federal Food, Drug, and Cosmetic Act, which required that new drugs be shown to be safe before they could be sold. Isn’t it time we got a move on?

John Naughton chairs the advisory board of the new Minderoo Centre for Technology and Democracy at the University of Cambridge

Publication: The Guardian

In university libraries, a battle for seats is looming

To comply with health guidelines, universities will have to cut drastically the number of seats available in their libraries. At the Sorbonne-Nouvelle, access will depend on one's date of birth: even or odd day.

How do you push back the walls? That is the question facing university library directors on the eve of the new academic year. On Tuesday 1 September, the libraries of the Sorbonne-Nouvelle (Paris-III) reopened. In normal times, 900 library seats are available to the university's 17,000 students. But this year, to comply with the health guidelines set out in the latest ministerial circular, there will be far fewer.

"We had to take one seat in two out of service," explains Clémence Joste, head of public services at Paris-III. Half of the chairs are marked off-limits with stickers, and a lateral distance of one metre is maintained between those that remain in use, even though masks are compulsory.

Nationally, applying the distancing rule means losing "30 to 50% of the seats in university libraries, even though they are the most heavily used spaces in our institutions," notes Sandrine Gropp, vice-president of the Association of university library directors and senior library staff (ADBU).

With student numbers showing no sign of falling, each institution will have to devise a system for managing the flow. "We may fear that every university will have to choose which type of user gets priority," worries Sandrine Gropp. "Either first-year students are favoured, because they need time to settle into university life, or priority goes to master's students and doctoral candidates, because their studies involve a substantial research component and they particularly need access to documentation." But there will not be room for everyone, or not every day.

Even days, odd days

To avoid choosing between new students and their elders, the Sorbonne-Nouvelle has made a different trade-off: restricting access to the university according to each student's date of birth. Students born on an even day will have access to the university's premises (including the libraries) in even weeks; those born on an odd day may come in odd weeks. The scheme is meant to make it easier to respect the distancing and one-way circulation routes imposed by the health rules.
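The parity rule amounts to a one-line check. A minimal sketch, assuming ISO week numbering (the article does not say which week numbering the university uses) and an invented birth date:

```python
# Sketch of the parity rule described above, assuming ISO week numbers:
# students born on an even day may enter in even weeks, those born on an
# odd day in odd weeks.
import datetime

def may_enter(birth_date, on_date):
    # Access is granted when birth-day parity matches the week's parity.
    return birth_date.day % 2 == on_date.isocalendar()[1] % 2

# 7 September 2020 falls in ISO week 37 (odd), so odd birth days are admitted.
ok = may_enter(datetime.date(2001, 3, 15), datetime.date(2020, 9, 7))
```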

Publication: Le Monde