Subjects -> SCIENCES: COMPREHENSIVE WORKS (Total: 374 journals)
Similar Journals
 The Winnower   Number of Followers: 0   Open Access journal   ISSN (Online): 2373-146X   Published by The Winnower  [1 journal]
• I'm Steve Bernard, Senior Visual Journalist at the Financial Times. AMA on
how we mapped the war in Ukraine

• Authors: (financialtimes)
Abstract: PROOF: https://i.redd.it/deju9epm14ka1.jpg
Hi there, I work in the graphics department at the Financial Times website/newspaper, where I have worked for over 26 years and have seen many changes in this industry over my career. The main focus of this AMA is how we have used maps during the Russia/Ukraine conflict. But you can ask me anything you want with regards to visualising data on maps, telling stories with maps, processes, software, or how I got into data visualisation.
Russia’s invasion of Ukraine in maps — latest updates
Kherson counter-offensive
Data visualisation: how the FT newsroom designs maps
Sand castles on Jersey shore: property boom defies US flood risk
Suez blockage animation
How to create an animated smoke map
Small multiple maps showing California's 22 years of dealing with drought
Animation showing shipments of Russian fossil fuels to Europe since the invasion of Ukraine
Animation showing civilian and military targets in Ukraine since the beginning of the Russian invasion
3D animation of China’s nitrogen dioxide pollution levels since 2005
Read and Review the full paper at TheWinnower.com
PubDate: Fri, 24 Feb 2023 10:50:25 -0500

• We're The Economist's data team. Ask us anything!

• Authors: (theeconomist)
Abstract: Hi everyone. We're The Economist's data team. We gather, analyse and visualise data for The Economist and produce data-driven journalism. Over the past year we've created many coronavirus trackers, a risk estimator and most recently an excess-mortality model, and we've seen the interest in our work skyrocket. We can answer questions about anything relating to data journalism at The Economist. All of our work can be found on the website here or you can follow us on Twitter for updates. For more exclusive insights, sign up for our free weekly newsletter.
Proof: https://twitter.com/ECONdailycharts/status/1394666569599438851?s=20 Read and Review the full paper at TheWinnower.com
PubDate: Thu, 20 May 2021 12:50:44 -0400

• ModelingToolkit, Modelica, and Modia: The Composable Modeling Future in
Julia

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
PubDate: Tue, 18 May 2021 07:57:16 -0400

• GPU-Accelerated ODE Solving in R with Julia, the Language of Libraries

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
Abstract: R is a widely used language for data science, but due to performance most of its underlying libraries are written in C, C++, or Fortran. Julia is a relative newcomer to the field which has busted out since its 1.0 release to become one of the top 20 most used languages due to its high-performance libraries for scientific computing and machine learning. Julia's value proposition has been high performance in a high-level language, known as solving the two-language problem, which has allowed the language to build a robust, mature, and expansive package ecosystem. While this has been a major strength for package developers, the fact remains that there are still large and robust communities in other high-level languages like R and Python. Instead of spawning distracting language wars, we should ask the question: can Julia become a language of libraries to accelerate these other languages as well?
This is definitely not the first time this question was asked. The statistics libraries in Julia were developed by individuals like Douglas Bates, who built some of R's most widely used packages like lme4 and Matrix. Doug had written a blog post in 2018 showing how to get top-notch performance in linear mixed effects model fitting via JuliaCall. In 2018 the JuliaDiffEq organization had written a blog post demonstrating the use of DifferentialEquations.jl in R and Python (the Jupyter of Differential Equations). Now rebranded as SciML for Scientific Machine Learning, we looked to expand our mission and bring automated model discovery and acceleration to other languages like R and Python, with Julia as the base.
With the release of diffeqr v1.0, we can now demonstrate many advances in R through the connection to Julia. Specifically, I would like to use this blog post to showcase:
The new direct wrapping interface of diffeqr
JIT compilation and symbolic analysis of ODEs and SDEs in R using Julia and ModelingToolkit.jl
GPU-accelerated simulations of ensembles using Julia's DiffEqGPU.jl
Together we will demonstrate how models in R can be accelerated by 1000x without a user ever having to write anything but R.
A Quick Note Before Continuing
Before continuing on with showing all of the features, I wanted to ask for support so that we can continue developing these bridged libraries. Specifically, I would like to be able to support developers interested in providing a fully automated Julia installation and static compilation so that calling into Julia libraries is just as easy as any Rcpp library. To show support, the easiest thing to do is to star our libraries. The work of this blog post is built on DifferentialEquations.jl, diffeqr, ModelingToolkit.jl, and DiffEqGPU.jl. Thank you for your patience, and now back to our regularly scheduled program.
diffeqr v1.0: Direct Wrappers for Differential Equation Solving in R
First let me start with the new direct wrappers of differential equation solvers in R. In the previous iterations of diffeqr, we had relied on specifically designed high… Read and Review the full paper at TheWinnower.com
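The ensemble idea behind DiffEqGPU.jl can be sketched in plain Julia: solve the same model for many parameter sets in parallel. This is a toy stand-in using threads and a hand-rolled fixed-step RK4 on a logistic model; `rk4`, the model, and all values are illustrative and are not the library's API.

```julia
# Toy "ensemble" solve: one ODE, many parameter sets, run in parallel.
# A fixed-step RK4 over Base.Threads stands in for the GPU kernel that
# DiffEqGPU JIT-compiles from the model definition.
using Base.Threads

function rk4(f, u0, p, t0, t1, n)
    u, t, h = u0, t0, (t1 - t0) / n
    for _ in 1:n
        k1 = f(u, p, t)
        k2 = f(u + h/2 * k1, p, t + h/2)
        k3 = f(u + h/2 * k2, p, t + h/2)
        k4 = f(u + h * k3, p, t + h)
        u += h/6 * (k1 + 2k2 + 2k3 + k4)
        t += h
    end
    return u
end

# Logistic growth du/dt = p*u*(1-u); sweep the rate parameter p.
f(u, p, t) = p * u * (1 - u)
params = range(0.5, 2.0, length = 64)
results = zeros(length(params))
@threads for i in eachindex(params)
    results[i] = rk4(f, 0.1, params[i], 0.0, 10.0, 1000)
end
```

Each trajectory is independent, which is exactly why this pattern maps so well onto GPUs.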
PubDate: Tue, 18 May 2021 07:52:19 -0400

• Generalizing Automatic Differentiation to Automatic Sparsity, Uncertainty,
Stability, and Parallelism

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
Abstract: Automatic differentiation is a "compiler trick" whereby a code that calculates f(x) is transformed into a code that calculates f'(x). This trick and its two forms, forward and reverse mode automatic differentiation, have become the pervasive backbone behind all of the machine learning libraries. If you ask what PyTorch or Flux.jl is doing that's special, the answer is really that it's doing automatic differentiation over some functions.
What I want to dig into in this blog post is a simple question: what is the trick behind automatic differentiation, why is it always differentiation, and are there other mathematical problems we can be focusing this trick towards? While very technical discussions on this can be found in our recent paper titled "ModelingToolkit: A Composable Graph Transformation System For Equation-Based Modeling" and descriptions of methods like intrusive uncertainty quantification, I want to give a high-level overview that really describes some of the intuition behind the technical thoughts. Let's dive in!
What is the trick behind automatic differentiation? Non-standard interpretation
To understand automatic differentiation in practice, you need to understand that it's at its core a code transformation process. While mathematically it comes down to being about Jacobian-vector products and Jacobian-transpose-vector products for forward and reverse mode respectively, I think sometimes that mathematical treatment glosses over the practical point that it's really about code.
Take for example f(x) = sin(x). If we want to take the derivative of this, then we could do a finite difference f'(x) ≈ (f(x + ε) - f(x)) / ε, but this misses the information that we actually know analytically how to define the derivative! Using the principle that algorithm efficiency comes from problem information, we can improve this process by directly embedding that analytical solution into our process. 
So we come to the first principle of automatic differentiation: if you know the analytical solution to the derivative, then replace the function with its derivative. So if you see sin(x) and someone calls ``derivative(f,x)``, you can do a quick little lookup to a table of rules, known as primitives, and if it's in your table then boom, you're done. Swap it in, call it a day.
This already shows you that, with automatic differentiation, we cannot think of sin(x) as just a function, just a thing that takes in values, but we have to know something about what it means semantically. We have to look at it and identify "this is sin" in order to know "replace it with cos". This is the fundamental limitation of automatic differentiation: it has to know something about your code, more information than it takes to call or run your code. This is why many automatic differentiation libraries are tied to specific implementations of underlying numerical primitives. PyTorch understands ``torch.sin`` as sin, but it does not understand ``tf.sin`` as sin, which is why if you place a TensorFlow function into a PyTorch training loop you will get an error thrown about the derivative calculation. This semantic mapping is the reason for libraries lik...
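The rule-lookup idea can be made concrete with a minimal dual-number forward-mode AD in plain Julia. This is a toy sketch of the principle, not how ForwardDiff.jl or any production library is implemented; `Dual` and `derivative` are invented names.

```julia
# Minimal forward-mode AD: carry (value, derivative) pairs through the
# computation, and give each primitive its analytical derivative rule.
struct Dual
    val::Float64   # f(x)
    der::Float64   # f'(x)
end

# Primitive rules ("the table"): sum rule, product rule, and sin -> cos.
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)  # "replace sin with cos"

# derivative(f, x): seed the derivative part with 1 and read off f'(x).
derivative(f, x) = f(Dual(x, 1.0)).der

f(x) = sin(x) * sin(x)
derivative(f, 1.0)   # exact 2*sin(1)*cos(1), no finite-difference error
```

Note that `f` never changes: the non-standard interpretation happens because `sin` and `*` mean something different when fed a `Dual`.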
PubDate: Tue, 18 May 2021 07:51:46 -0400

• We’re Allison Mccartney and Brittany Harris, data reporters and
engineers on the Bloomberg News Graphics team. We worked on the 2016 and
2018 election cycles, and have been focused for the past year (at least!)
on our data-driven coverage of the 2020 U.S. election. Ask Us Anything!

• Authors: (bloomberg)
PubDate: Thu, 05 Nov 2020 08:50:54 -0500

• How Inexact Models Can Guide Decision Making in Quantitative Systems
Pharmacology

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
Abstract: Pre-clinical Quantitative Systems Pharmacology (QSP) is about trying to understand how a drug target affects an outcome. If I affect this part of the biological pathways, how will it induce toxicity? Will it be effective?
Recently I have been pulling in a lot of technical colleagues to help with the development of next generation QSP tooling. Without a background in biological modeling, I found it difficult to explain the "how" and "why" of pharmacological modeling. Why is it differential equations, and where do these "massively expensive global optimization" runs come from? What kinds of problems can you solve with such models when you know that they are only approximate?
To solve these questions, I took a step back and tried to explain a decision making scenario with a simple model, to showcase how playing with a model can allow one to distinguish between intervention strategies and uncover a way forward. This is my attempt. Instead of talking about something small and foreign like chemical reaction concentrations, let's talk about something mathematically equivalent that's easy to visualize: ecological intervention.
Basic Modeling and Fitting
Let's take everyone's favorite ecology model: the Lotka-Volterra model. The model is the following:
Left alone, the rabbit population will grow exponentially
Rabbits are eaten by wolves in proportion to the number of wolves (number of mouths to feed), and in proportion to the number of rabbits (ease of food access: you eat more at a buffet!)
Wolf populations grow exponentially, as long as there is a proportional amount of food around (rabbits)
Wolves die over time of old age, and any generation dies at a similar age (no major wolf medical discoveries)
The model is then the ODE:
using OrdinaryDiffEq, Plots
function f(du,u,p,t)
  du[1] = dx = p[1]*u[1] - p[2]*u[1]*u[2]
  du[2] = dy = -p[3]*u[2] + p[4]*u[1]*u[2]
end
u0 = [1.0;1.0]
tspan = (0.0,10.0)
p = [1.5,1.0,3.0,1.0]
prob = ODEProblem(f,u0,tspan,p)
sol = solve(prob,Tsit5())
plot(sol,label=["Rabbits" "Wolves"])
Except, me showing you that picture glossed over a major detail: every piece of the model is not only mechanistic but also contains a parameter. For example, rabbits grow exponentially, but what's the growth rate? To make that plot I chose a value for that growth rate (1.5), but in reality we need to get that from data, since the results can be wildly different:
p = [0.1,1.0,3.0,1.0]
prob = ODEProblem(f,u0,tspan,p)
sol = solve(prob,Tsit5())
plot(sol)
Here the exponential growth rate of rabbits is too low to sustain a wolf population, so the wolf population dies out, but then this makes the rabbits have no predators and grow exponentially, which is a common route of ecological collapse as then they will destroy the local ecosystem. More on that later.
Data and Model Issues
But okay, we need parameters from data, but no single data source is great. One gives us a noisy sample of the population yearly, another every month for the first two years and only on the wolves, etc.:
function f_true(du,u,p,t)
  du[1] = dx = p[1]*u[1] -… Read and Review the full paper at TheWinnower.com
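The "get the growth rate from data" step can be caricatured in plain Julia: simulate "observed" rabbit counts with a known rate, then recover it by scanning candidate rates against the simulated trajectory. This is a toy least-squares sketch with a fixed-step Euler integrator standing in for Tsit5(); all names and values are illustrative, not taken from the post.

```julia
# Toy parameter recovery for Lotka-Volterra: fix all parameters except the
# rabbit growth rate, generate "data", then grid-search the rate that
# minimizes the squared error against that data.
function simulate(p; u0 = [1.0, 1.0], h = 0.001, T = 10.0)
    u = copy(u0); traj = Float64[]
    for t in 0:h:T
        du1 = p[1]*u[1] - p[2]*u[1]*u[2]   # rabbits: growth - predation
        du2 = -p[3]*u[2] + p[4]*u[1]*u[2]  # wolves: death + food-driven growth
        u = [u[1] + h*du1, u[2] + h*du2]   # forward Euler step
        push!(traj, u[1])                  # record the rabbit population
    end
    return traj
end

data = simulate([1.5, 1.0, 3.0, 1.0])        # "observed" rabbits, true rate 1.5
loss(a) = sum(abs2, simulate([a, 1.0, 3.0, 1.0]) .- data)
candidates = 0.5:0.1:2.5
best = candidates[argmin(loss.(candidates))]  # grid search recovers the rate
```

With noisy, partially observed data this brute-force scan becomes the "massively expensive global optimization" the abstract alludes to.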
PubDate: Tue, 24 Mar 2020 16:07:54 -0400

• Hey everybody, I'm Tom Smith from the Office for National Statistics’
Data Science Campus. We’re using data to help the UK improve people’s

• Authors: (ONS_UK)
Abstract: Hi Reddit, I’m Tom Smith, MD for the UK’s Data Science Campus as part of the Office for National Statistics. I have 20 years’ experience using data and analysis to improve public services and am a life-long data addict. I have a PhD in computational neuroscience and robotics, an MSc in knowledge-based systems and an MA in theoretical physics. I'm currently Chair of the Advisory Board to the United Nations Global Platform for big data & official statistics, Member of Council for the UK Royal Statistical Society, and previously chair of the Environment Agency Data Advisory Group, vice-chair of the Royal Statistical Society Official Statistics section, and a member of the Open Data User Group ministerial advisory group to Cabinet Office.
Since the Campus was founded in 2017 we have been working on a huge range of projects including:
- using tax returns, ship tracking data and road traffic sensor data to allow early identification of large economic changes;
- exploring what internet traffic peaks and troughs can tell us about our lives;
- using satellite imagery to detect surface water and assess changes over time, for rapid detection of emerging issues;
- launching a hub focused on data science and AI for International Development, located at the Department for International Development (DfID), near Glasgow;
- supporting ONS, government and public sector organisations to increase their data science capability. We’re aiming to have 500 trained data science practitioners for UK government by 2021.
I'll be here to talk about statistics, data and making the world a better place from 3-5pm GMT today. Proof: https://twitter.com/ONSfocus/status/1237060713140625416
Ask me anything! Read and Review the full paper at TheWinnower.com
PubDate: Mon, 16 Mar 2020 06:50:40 -0400

• We are survey methodologists, and we’re here to answer all your
nerdy data questions.

• Authors: (AAPOR)
Abstract: We’re Jessica Holzberg and Ashley Amaya, both survey research methodologists based in Washington, D.C. Questions abound regarding the value and reliability of survey research, including federal data, and we want to share how we work to uncover insights that impact the lives of everyday Americans. Public opinion research is essential to a healthy democracy and provides information that is crucial to informed policymaking. This research gives voice to the nation’s beliefs, attitudes and desires. Ask us how!
We believe in transparency and in ethical survey practices. We also believe some practices are not at all above board. You can ask us about those, too.
I’m Jessica, and I am the associate communications chair for the American Association for Public Opinion Research (AAPOR). I use both qualitative and quantitative research methods such as cognitive interviewing, focus groups, web probing and experiments to reduce survey measurement error and improve the clarity of communication around surveys. I particularly like talking about the burden of surveys for respondents, measurement of sexual orientation and gender identity, and issues surrounding privacy and confidentiality.
I’m Ashley, and I am a senior research survey methodologist at RTI International. I am also the Editor-in-chief of Survey Practice, an assistant research professor at University of Maryland and University of Mannheim, and a member of AAPOR’s Standards Definitions and Policy Impact Award Committees. I focus on the big picture of any design to make sure that all components (e.g., sampling, data collection modes, questionnaires, analysis) form a cohesive design. I also like talking about alternative sources of data (e.g., administrative records, digital trace data) that can enhance or replace survey data. Proof:
https://i.redd.it/2flepy2kfhz31.jpg
PubDate: Wed, 20 Nov 2019 12:50:37 -0500

• We’re survey research methodologists based in Washington, D.C. Questions
abound regarding the value and reliability of survey research, including
federal data, and we want to share how we work to uncover insights that
impact the lives of everyday Americans. AMA!

• Authors: (PHealthy)
Abstract: We’re Jessica Holzberg and Ashley Amaya, both survey research methodologists based in Washington, D.C. Questions abound regarding the value and reliability of survey research, including federal data, and we want to share how we work to uncover insights that impact the lives of everyday Americans. Public opinion research is essential to a healthy democracy and provides information that is crucial to informed policymaking. This research gives voice to the nation’s beliefs, attitudes and desires. Ask us how!
We believe in transparency and in ethical survey practices. We also believe some practices are not at all above board. You can ask us about those, too.
I’m Jessica, and I am the associate communications chair for the American Association for Public Opinion Research (AAPOR). I use both qualitative and quantitative research methods such as cognitive interviewing, focus groups, web probing and experiments to reduce survey measurement error and improve the clarity of communication around surveys. I particularly like talking about the burden of surveys for respondents, measurement of sexual orientation and gender identity, and issues surrounding privacy and confidentiality.
I’m Ashley, and I am a senior research survey methodologist at RTI International. I am also the Editor-in-chief of Survey Practice, an assistant research professor at University of Maryland and University of Mannheim, and a member of AAPOR’s Standards Definitions and Policy Impact Award Committees. I focus on the big picture of any design to make sure that all components (e.g., sampling, data collection modes, questionnaires, analysis) form a cohesive design. I also like talking about alternative sources of data (e.g., administrative records, digital trace data) that can enhance or replace survey data.
We will begin answering questions at 1pm EST. Ask Us Anything! Read and Review the full paper at TheWinnower.com
PubDate: Wed, 20 Nov 2019 11:51:09 -0500

• Science Discussion Series: What should and shouldn't be done with your
personal genetic data? Who should benefit? We are researchers and
advocates who are working on new models for DNA research. Let's discuss!

• Authors: (ScienceModerator)
PubDate: Thu, 03 Oct 2019 09:50:54 -0400

• Science Discussion Series: Climate Change is in the news so let’s talk
about it! We’re experts in climate science and science communication,
let’s discuss!

• Authors: (ScienceModerator)
Abstract: Hi reddit! This month the UN is holding its Climate Action Summit, it is New York City's Climate Week next week, today is the Global Climate Strike, earlier this month was the Asia Pacific Climate Week, and there are many more local events happening. Since climate change is in the news a lot, let’s talk about it!
We're a panel of experts who study and communicate about climate change's causes, impacts, and solutions, and we're here to answer your questions about it! Is there something about the science of climate change you never felt you fully understood? Questions about a claim you saw online or on the news? Want to better understand why you should care and how it will impact you? Or do you just need tips for talking to your family about climate change at Thanksgiving this year? We can help!
Here are some general resources for you to explore and learn about the climate:
AAAS just released a report with case studies and videos of how communities and companies (and individuals) in the US are working with scientists to respond to climate change called "How We Respond."
NASA: Vital Signs of the Planet
National Academies of Sciences: Climate Change Evidence and Causes
PubDate: Fri, 20 Sep 2019 09:51:14 -0400

• Science discussion series: Small-scale mining provides a huge portion of
the world’s minerals and metals, but has major effects on health and the
environment. We are a team of scientists focused on finding solutions to
these problems, let’s discuss!

• Authors: (ScienceModerator)
PubDate: Fri, 30 Aug 2019 08:50:46 -0400

• The Essential Tools of Scientific Machine Learning (Scientific ML)

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
Abstract: Scientific machine learning is a burgeoning discipline which blends scientific computing and machine learning. Traditionally, scientific computing focuses on large-scale mechanistic models, usually differential equations, that are derived from scientific laws that simplified and explained phenomena. On the other hand, machine learning focuses on developing non-mechanistic data-driven models which require minimal knowledge and prior assumptions. The two sides have their pros and cons: differential equation models are great at extrapolating, the terms are explainable, and they can be fit with small data and few parameters. Machine learning models on the other hand require "big data" and lots of parameters, but are not biased by the scientist's ability to correctly identify valid laws and assumptions.
However, the recent trend has been to merge the two disciplines, allowing explainable models that are data-driven, require less data than traditional machine learning, and utilize the knowledge encapsulated in centuries of scientific literature. The promise is to fuse a priori domain knowledge which doesn't fit into a "dataset", allow this knowledge to specify a general structure that prevents overfitting, reduces the number of parameters, and promotes extrapolatability, while still utilizing machine learning techniques to learn specific unknown terms in the model. This has started to be used for outcomes like automated hypothesis generation and accelerated scientific simulation.
The purpose of this blog post is to introduce the reader to the tools of scientific machine learning, identify how they come together, and showcase the existing open source tools which can help one get started. We will be focusing on differentiable programming frameworks in the major languages for scientific machine learning: C++, Fortran, Julia, MATLAB, Python, and R. We will be comparing two important aspects: efficiency and composability. 
Efficiency will be taken in the context of scientific machine learning: by now most tools are well-optimized for the giant neural networks found in traditional machine learning, but, as will be discussed here, that does not necessarily make them efficient when deployed inside of differential equation solvers or when mixed with probabilistic programming tools. Additionally, composability is a key aspect of scientific machine learning since our toolkit is not ML in isolation. Our goal is not to do machine learning as seen in a machine learning conference (classification, NLP, etc.), and it's not to do traditional machine learning as applied to scientific data. Instead, we are putting ML models and techniques into the heart of scientific simulation tools to accelerate and enhance them. Our neural networks need to fully integrate with tools that simulate satellites and robotics simulators. They need to integrate with the packages that we use in our scientific work for verifying numerical accuracy, tracking units, estimating uncertainty, and much more. We need our neural networks to play nicely with existing packages for delay differential equations or reconstruction of dynamical systems. Otherwise we need to write the entire toolchain from scratch! While writing a neural network framework may be a good undergraduate project with modern tools, writing a neural network framework plus adaptive stiff differential equation… Read and Review the full paper at TheWinnower.com
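As a minimal illustration of putting a learnable term inside a mechanistic model, here is plain Julia composing a known linear-decay mechanism with a tiny untrained neural network inside an ODE right-hand side. Every name and value is hypothetical; this is a sketch of the idea, not the SciML stack's API, and real use would train the weights against data.

```julia
# Mixed mechanistic + neural model: du/dt = -u + NN(u).
# The -u term is the "known physics"; NN is the unknown residual a
# scientific ML workflow would learn. Here NN has random, untrained weights.
using Random
Random.seed!(1)

W1, b1 = randn(8, 1) .* 0.1, zeros(8)   # tiny 1 -> 8 -> 1 MLP
W2 = randn(1, 8) .* 0.1
nn(u) = (W2 * tanh.(W1 * [u] .+ b1))[1]  # scalar in, scalar out

rhs(u) = -u + nn(u)                      # mechanism plus learned residual

# Integrate with forward Euler over t in [0, 10].
function integrate(u0, h, n)
    u = u0
    for _ in 1:n
        u += h * rhs(u)
    end
    return u
end

u_final = integrate(1.0, 0.01, 1000)     # decays toward the residual's fixed point
```

Because the neural term is just another function in the right-hand side, the same solver machinery (and, with AD, the same gradient machinery) applies unchanged; that composability is the whole point.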
PubDate: Tue, 20 Aug 2019 10:51:06 -0400

• We're The Washington Post data journalists and finished a comprehensive
project tracking the opioid crisis in America. AMA.

• Authors: (washingtonpost)
Abstract: Hello r/dataisbeautiful! We are Steven Rich, Aaron Williams and Andrew Ba Tran of The Washington Post’s data and design team!
We've compiled a comprehensive database on the sale of pain pills which fueled the opioid epidemic. The Post team sifted through almost 380 million transactions from 2006 through 2012 in the Drug Enforcement Administration’s database and made the data available at state and county levels to help the public understand the national crisis.
We're here to talk about the methodology, the tracking, how we’ve seen people use the data, and how you can too! Want to take a peek at the data? Here’s how to do it. “The Opioid Files” is an investigative effort to analyze an epidemic that’s claimed the lives of more than 200,000 people since 1996. All of our past coverage can be found here. We start at 1 p.m. Looking forward to answering your questions, and special thanks to the mods for inviting us here! Read and Review the full paper at TheWinnower.com
PubDate: Fri, 16 Aug 2019 11:51:02 -0400

• Science Discussion Series: We're scientists from Vanderbilt studying how
microbes relate to gut health and what this research means for risk of
disease and developing new treatments. Let’s discuss!

• Authors: (ScienceModerator)
PubDate: Mon, 22 Jul 2019 08:50:44 -0400

• Neural Jump SDEs (Jump Diffusions) and Neural PDEs

• Authors: me@chrisrackauckas.com (Christopher Rackauckas)
Abstract: This is just an exploration of some new neural models I decided to jot down for safekeeping. DiffEqFlux.jl gives you the differentiable programming tools to allow you to use any DifferentialEquations.jl problem type (DEProblem) mixed with neural networks. We demonstrated this before, not just with neural ordinary differential equations, but also with things like neural stochastic differential equations and neural delay differential equations.
At the time we made DiffEqFlux, we were the "first to the gate" for many of these differential equation types and left it as an open question for people to find a use for these tools. And judging by the Arxiv papers that went out days after NeurIPS submissions were due, it looks like people now have justified some machine learning use cases for them. There were two separate papers on neural stochastic differential equations, showing them to be the limit of deep latent Gaussian models. Thus when you stick these new mathematical results on our existing adaptive high order GPU-accelerated neural SDE solvers, you get some very interesting and fast ways to learn some of the most cutting edge machine learning methods.
So I wanted to help you guys out with staying one step ahead of the trend by going to the next differential equations. One of the interesting NeurIPS-timed Arxiv papers was on jump ODEs. Following the DiffEqFlux.jl spirit, you can just follow the DifferentialEquations.jl tutorials on these problems, implement them, add a neural network, and it will differentiate through them. So let's take it one step further and show an example of how you'd do that. I wanted to take a look at jump diffusions, or jump stochastic differential equations, which are exactly what they sound like: a mixture of these two methods. 
After that, I wanted to show how using some methods for stiff differential equations plus a method of lines discretization gives a way to train neural partial differential equations.
Instead of being fully defined by neural networks, I will also be showcasing how you can selectively make parts of a differential equation neuralitized and other parts pre-defined, something we've been calling mixed neural differential equations, so we'll demonstrate a mixed neural jump stochastic differential equation and a mixed neural partial differential equation with fancy GPU-accelerated adaptive etc. methods. I'll then leave as homework how to train a mixed neural jump stochastic partial differential equation with the fanciest methods, which should be easy to see from this blog post (so yes, that will be the MIT 18.337 homework). This blog post will highlight that these equations are all already possible within our framework, and will also show the specific places we see that we need to accelerate to really put these types of models into production.
Neural Jump Stochastic Differential Equations (Jump Diffusions)
To get to jump diffusions, let's start with a stochastic differential equation. A stochastic differential equation is defined via
$dX_t = f(t,X_t)dt + g(t,X_t)dW_t$
which is essentially saying that there is a deterministic term $f$ and a… Read and Review the full paper at TheWinnower.com
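The jump-diffusion construction can be sketched in plain Julia with Euler-Maruyama steps plus Poisson-timed jumps. This is a toy illustration only: the drift, diffusion, and jump choices are invented, and DifferentialEquations.jl handles this properly through its jump-problem machinery with adaptive solvers.

```julia
# Toy jump diffusion dX = f(t,X)dt + g(t,X)dW + jumps:
# Euler-Maruyama for the SDE part, with a Bernoulli(λ*h) approximation
# of a rate-λ Poisson process triggering fixed-size jumps.
using Random
Random.seed!(42)

function jump_diffusion(x0; T = 1.0, h = 1e-3, λ = 2.0, jump = 0.5)
    x, path = x0, [x0]
    for _ in 1:round(Int, T / h)
        x += -x * h + 0.3 * sqrt(h) * randn()   # drift f = -x, diffusion g = 0.3
        rand() < λ * h && (x += jump)           # Poisson(λ)-timed jump
        push!(path, x)
    end
    return path
end

path = jump_diffusion(1.0)
```

In the neural version, the drift, diffusion, and jump amplitude would each be replaced (or partially replaced) by neural networks and trained by differentiating through this simulation.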
PubDate: Wed, 05 Jun 2019 12:13:16 -0400

• Science Discussion Series: Batteries seem to power everything today- cell
phones, cars, homes, even airplanes! We are a team of scientists and
engineers working on batteries and energy storage, let's discuss!

• Authors: (ScienceModerator)
PubDate: Tue, 30 Apr 2019 09:51:06 -0400

• Science discussion series: We are an interdisciplinary group of water
science professionals and we’re here to discuss safe drinking water. Ask
us anything!

• Authors: (ScienceModerator)
Abstract: Will Logan (u/Will_Logan_ICIWaRM) is currently the Director at the International Center for Integrated Water Resources Management (ICIWaRM), which is part of the U.S. Army Corps of Engineers. Previously, Will was the Science Attaché for the US Mission to UNESCO and he served for almost a decade on the Water, Science, and Technology Board at the National Academies of Sciences. Will holds a Ph.D. in Earth Sciences/Hydrogeology from the University of Waterloo and was an Assistant Professor of Hydrogeology at George Washington University.
Ellen de Guzman (u/Ellen_de_Guzman) is currently the Senior Water Officer in the Middle East and North Africa Bureau at USAID. Ellen has managed projects spanning rural reconstruction, humanitarian and disaster response, alternative livelihoods, food security, agriculture, water and sanitation. Prior to USAID, Ellen worked for the National Academies of Sciences, where she provided policy research support to develop federal policies on managing subsurface water contamination, the Clean Water Act, sustainable water and environmental management in the California Bay-Delta, and invasive species in ballast water.
Jin Shin (u/Jin_Shin_WSSC) is currently the Water Quality Division Manager at WSSC (Washington Suburban Sanitary Commission), where he has worked for nearly 15 years. The WSSC is one of the largest water and wastewater utilities in the nation, with a service area that spans nearly 1,000 square miles in Prince George’s and Montgomery counties in Maryland. Jin holds a Ph.D. in Environmental Engineering from Johns Hopkins University, where he was also a lecturer and visiting professor for 6 years.
Teddi Ann Galligan (u/Teddi_Ann_Galligan) is a community science educator. She draws from firsthand experience living in conditions where safe drinking water was a daily issue, as well as substantial laboratory experience, which includes wastewater analysis for a sustainable sanitation digestion technology, water quality analysis, and clinical laboratory work in low-resource settings. Currently Director of Covalence Science Education, Ms. Galligan has designed and delivered hands-on programs in a wide variety of environments, ranging from classrooms in the United States to open-air community science workshops in Port-au-Prince, Haiti. Teddi Ann was an educator and consultant at the Marian Koshland Science Museum of the National Academy of Sciences for more than a decade, helping visitors use science to address real world community resilience issues associated with climate change.
Our guests will be answering questions starting at 8:30 PM EST. Read and Review the full paper at TheWinnower.com
PubDate: Wed, 03 Apr 2019 15:51:34 -0400

• Hi, I'm Alan Smith, Data visualisation editor at the Financial Times. I've
just finished an experimental project at the FT to both visualise and
sonify the historical yield curve - a large dataset of over 100,000 data
points. AMA!

• Authors: (financialtimes)
Abstract: Hi, I'm Alan Smith, Data visualisation editor at the Financial Times. I've just finished an experimental project at the FT to both visualise and sonify the historical yield curve - a large dataset of over 100,000 data points. I've filmed a step-by-step walkthrough of the project. And the end product, a combined animated data visualisation and sonification of four decades of the US yield curve, is available on YouTube: https://www.youtube.com/watch?v=GoQBWcNw6IU . My full article is on the FT website: ft.com/music-from-data
My work has also coincided with the release of a new open source tool funded by Google* that allows users to make music from spreadsheets. So - is data sonification ready to be the next big thing in data presentation? Can it bring data to new audiences such as the blind/visually impaired, podcast listeners, and those accessing the web via screenless devices with voice interfaces? Or is it a simple novelty? Ask me anything!
*TwoTone app funded by Google (https://app.twotone.io/)
Proof: https://i.redd.it/pmafgrjd94n21.jpg Read and Review the full paper at TheWinnower.com
PubDate: Thu, 21 Mar 2019 11:50:42 -0400

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762