Radiative cooling and heating rates are computed on an element-by-element basis by interpolating tables generated with the CLOUDY code. Star particles are treated as simple stellar populations with an IMF of the form proposed by Chabrier, spanning the range 0. At each time step and for each stellar particle, the stellar masses reaching the end of the main-sequence phase are identified using metallicity-dependent lifetimes. The fraction of the initial particle mass reaching this evolutionary stage is then used, together with the particle's initial elemental abundances and nucleosynthetic yield tables, to compute the mass of each element lost through winds from AGB stars, winds from massive stars, and Type II SNe.
These processes are particularly important for massive, short-lived stars and, if star formation is sufficiently vigorous, the associated feedback can drive large-scale galactic outflows. At present, simulations of large cosmological volumes lack the resolution necessary to model the self-consistent development of outflows from feedback injected on the scales of individual star clusters, and must therefore appeal to a subgrid treatment.
In the simplest implementation of energy feedback by thermal heating, the energy produced at each time step by a star particle is distributed to a number of its neighbouring hydrodynamic resolution elements, supplementing their internal energy. The resulting temperature increment is then far smaller than in reality, and by extension the radiative cooling time of the heated gas is much too short. Pressure gradients established by the heating are too shallow and, perhaps more importantly, are typically erased on a radiative time-scale shorter than the sound crossing time of a resolution element.
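The scale of this mismatch can be illustrated with a back-of-the-envelope sketch in Python. All numerical values below (energy per SN, neighbour count, particle and ejecta masses) are illustrative assumptions, not parameters quoted in this paper:

```python
# Sketch: why sharing feedback energy among many massive resolution
# elements yields a tiny temperature increment. Values are illustrative.

k_B = 1.38e-16   # Boltzmann constant [erg/K]
m_H = 1.67e-24   # proton mass [g]
mu = 0.6         # assumed mean molecular weight of ionized gas
Msun = 1.989e33  # solar mass [g]

def temperature_increment(e_feedback, m_heated):
    """Temperature rise of a monatomic gas mass m_heated [g] absorbing
    e_feedback [erg]: dT = (2/3) E mu m_H / (k_B M)."""
    return (2.0 / 3.0) * e_feedback * mu * m_H / (k_B * m_heated)

# One SN (~1e51 erg) shared among 50 neighbours of 1e6 Msun each,
# versus deposited into ~10 Msun of genuinely resolved ejecta:
dT_sim = temperature_increment(1e51, 50 * 1e6 * Msun)
dT_real = temperature_increment(1e51, 10 * Msun)
```

With these assumed numbers the simulated increment is tens of kelvin while the physical one exceeds 10^8 K, which is why the heated gas radiates its energy away long before it can do mechanical work.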
Besides enabling one to mitigate the problem described above, a stochastic implementation of feedback is advantageous because it enables the quantity of energy injected per feedback event to be specified, even if the mean quantity of energy injected per unit stellar mass formed is fixed. This mechanism is a means of modelling the subgrid radiative losses that are not addressed by our simple treatment of the ISM.
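A minimal sketch of such a stochastic injection scheme follows; the function name, its arguments, and the idea of fixing the specific-energy jump `delta_u` per event are illustrative assumptions rather than the production implementation:

```python
import random

def stochastic_heat(e_available, delta_u, neighbour_masses,
                    rng=random.Random(0)):
    """Heat neighbours by a fixed specific-energy jump delta_u with a
    probability chosen so the *expected* injected energy equals
    e_available; the energy per event stays fixed even though the mean
    energy per unit stellar mass formed is set by e_available."""
    total_mass = sum(neighbour_masses)
    p = min(1.0, e_available / (delta_u * total_mass))
    heated = [m for m in neighbour_masses if rng.random() < p]
    return p, heated
```

Because each heating event applies the full `delta_u`, the temperature increment per event is large enough to keep radiative losses realistic, while the probability `p` keeps the energy budget honest on average.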
Because these losses should depend on the physical conditions in the ISM, there is physical motivation to specify f th as a function of the local properties of the gas. Primarily, it is the freedom to adjust f th that enables the simulations to be calibrated. Besides regulating the growth of the BHs, AGN feedback quenches star formation in massive galaxies and shapes the gas profiles of their host haloes.
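One way to encode such a dependence, sketched below, is a sigmoid in metallicity and birth density that asymptotes to a low efficiency in metal-rich, diffuse gas and a high efficiency in metal-poor, dense gas. Every pivot value, exponent and asymptote in this function is an illustrative assumption, not a calibrated value from the simulations:

```python
def f_th(Z_over_Zsun, n_H, f_min=0.3, f_max=3.0,
         Z_pivot=0.1, n_pivot=0.67, n_Z=0.87, n_n=0.87):
    """Feedback efficiency as a double power-law sigmoid between f_min
    (metal-rich and/or diffuse birth gas, where losses are large) and
    f_max (metal-poor, dense birth gas). All parameter defaults are
    assumptions for illustration."""
    x = (Z_over_Zsun / Z_pivot) ** n_Z * (n_H / n_pivot) ** (-n_n)
    return f_min + (f_max - f_min) / (1.0 + x)
```

At the assumed pivots (Z = 0.1 Z_sun, n_H = 0.67 cm^-3 here) the function returns the midpoint of the two asymptotes.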
The implementation adopted here consists of two elements, namely (i) a prescription for seeding galaxies with BHs and for following their growth via mergers and gas accretion, and (ii) a prescription for coupling the radiated energy liberated by BH growth to the ISM. Calculations of BH properties are therefore functions of m BH , whilst gravitational interactions are computed using the particle mass.
When the subgrid BH mass exceeds the particle mass, BH particles stochastically accrete neighbouring gas particles such that particle and subgrid BH masses grow in concert. The relative velocity threshold prevents BHs from merging during the initial stages of galaxy mergers. However, larger values also make the feedback more intermittent. In general, the ambient density of gas local to the central BH of galaxies is greater than that of star-forming gas distributed throughout their discs, so a higher heating temperature is required to minimize numerical losses.
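The stochastic accretion of neighbours can be sketched as follows; the function and its swallowing-probability rule are hypothetical, intended only to illustrate how the particle mass can track the subgrid mass in expectation:

```python
import random

def stochastic_swallow(m_subgrid, m_particle, neighbour_gas,
                       rng=random.Random(1)):
    """If the subgrid BH mass exceeds the particle mass, swallow each
    neighbouring gas particle with probability deficit/total so that the
    particle mass grows, in expectation, toward the subgrid mass."""
    deficit = m_subgrid - m_particle
    if deficit <= 0.0:
        return m_particle, neighbour_gas
    total = sum(neighbour_gas)
    p = min(1.0, deficit / total)
    kept = []
    for m in neighbour_gas:
        if rng.random() < p:
            m_particle += m      # gas particle removed and accreted
        else:
            kept.append(m)
    return m_particle, kept
```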
This higher temperature increment was also found to be necessary in high-resolution simulations, since they resolve higher ambient densities close to BHs and hence exhibit higher cooling rates. (Table: parameters that are varied in the simulations; columns list the side length of the volume, L, and the particle number per species.) It was therefore argued that cosmological hydrodynamical simulations are presently unable to yield ab initio predictions of the stellar mass of galaxies or the mass of their central BHs. Subgrid parameters should therefore be calibrated such that simulations reproduce desired diagnostic quantities, stellar and BH masses being germane examples.
The optimal approach to calibrating subgrid models is not unambiguous, since there can be multiple measurable outcomes that are sensitive to the adjustment of subgrid parameters, some or all of which might reasonably be considered valid constraints. However, these quantities remain ill-characterized observationally, and are sensitive to the physical scale on which they are measured, which is itself generally not well known. Reproducing the properties of outflows on a particular spatial scale offers no guarantee that they are reproduced on other scales, since, for example, the interaction of outflows with the circumgalactic medium may be inadequately modelled.
The choice of calibration diagnostic(s) is therefore somewhat arbitrary, but some choices can be more readily motivated than others. Clearly, it is necessary that any diagnostic be well characterized observationally on the scales resolved by the simulation. In addition, it is desirable to confront calibrated models with complementary observational constraints, to minimize the risk of overlooking modelling degeneracies.
An additional motivation for appealing to the GSMF as the calibration diagnostic is that its reproduction by the simulations is a prerequisite for the examination of many observable scaling relations. Whilst calibrating the simulations, attention was also paid to the sizes of galaxies. The formation of disc galaxies with realistic sizes in cosmological simulations has proven to be a non-trivial challenge, leading to the identification and rectification of many shortcomings of numerical techniques.
It cannot be assumed that the sizes of galaxy discs will be accurately reproduced by simulations, even if they successfully reproduce a suitable calibration diagnostic. For this reason, we require a model to reproduce both the GSMF and the observed size–mass relation of disc galaxies at low redshift in order for the calibration process to be deemed successful. Therefore, energy feedback associated with star formation is calibrated exclusively by varying f th , the fraction of the total available energy from Type II SNe that couples to the ISM.
Since this mass scale is weakly dependent upon C visc , models can adopt values of this parameter that differ by orders of magnitude. Semi-analytic models are built upon the framework of dark matter halo merger trees, so the parameters they adopt for governing feedback must be coupled to simplified models for the structure of galaxies and their interstellar and circumgalactic media. In contrast, hydrodynamical simulations enable feedback properties to be specified, if so desired, based on the physical conditions local to newly formed star particles.
This implementation is well-motivated since, by temporarily decoupling winds and specifying their properties as a function of dark matter properties, it affords simulations the best opportunity to achieve numerical convergence.
However, it precludes the full exploitation of the hydrodynamics calculation. The physical properties of outflows are almost certainly dependent upon the local baryonic conditions of the ISM, and these properties are available as inputs to subgrid feedback models. Since the philosophy adopted for the EAGLE project is to calibrate the feedback scheme, the convergence demands placed upon the adopted subgrid models are relaxed, presenting the appealing opportunity to couple the values of subgrid parameters to the local physical conditions of the gas.
Four calibrated simulations are explored, each featuring energy feedback associated with both star formation and the growth of BHs. There are two means by which finite resolution introduces artificial losses. The second stems from the inability of large cosmological simulations to model the formation of the earliest generation of stars. The simplest model injects into the ISM a fixed quantity of energy per unit stellar mass formed, independent of local conditions. Although the injected energy is independent of local conditions, scale-dependent macroscopic radiative losses can none the less develop self-consistently, for example due to differences in the metallicity and hence cooling rate of outflowing gas, the ram pressure at the interface of the disc and the circumgalactic medium, or the depth of the potential well.
This model therefore provides a baseline against which it is possible to assess the degree to which the overall physical losses need to be established by calibrating losses on subgrid scales. However, we will see later that, although this model does reproduce the calibration diagnostic, it fails to reproduce the observed sizes of disc galaxies. This section begins with an examination of the calibrated simulations. As discussed by S15, the GSMF and galaxy sizes were used for the calibration process, so they are not presented as predictions. The aim of this exercise is to examine how different implementations of physical processes impact upon the properties of galaxies and their environments.
As part of this procedure, the conditions of the ISM from which stars are born are also explored. Maintaining the convention established by S15, curves are drawn with dotted lines where galaxies are less massive than a threshold number of initial-mass (m g) baryonic particles, as resolution tests presented in S15 indicate that sampling errors due to finite resolution are significant in this regime.
In both cases, the data have been adjusted for consistency with the value of the Hubble parameter adopted by the simulations. Curves are drawn with dotted lines where galaxies comprise fewer than a threshold number of star particles, and with dashed lines where the GSMF is sampled by fewer than 10 galaxies per bin. This precision is comparable to the systematic uncertainty associated with spectrophotometric techniques for inferring galaxy stellar masses, indicating that a more precise reproduction of the GSMF may be unwarranted.
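The line-style convention used in these figures reduces to a simple per-bin classification; the helper below is a hypothetical sketch of that bookkeeping, not code from the paper:

```python
def classify_bins(counts, masses, m_resolved, min_count=10):
    """Linestyle per mass bin: dotted below the resolution limit,
    dashed where the bin holds fewer than min_count galaxies,
    solid otherwise."""
    styles = []
    for n, m in zip(counts, masses):
        if m < m_resolved:
            styles.append("dotted")
        elif n < min_count:
            styles.append("dashed")
        else:
            styles.append("solid")
    return styles
```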
This degree of consistency with observational data is typical of that associated with semi-analytic galaxy formation models.
Since running multiple LN simulations is, at present, computationally prohibitive, the calibrated models have been run with LN initial conditions. The scaling of this quantity with stellar mass for disc galaxies is shown in Fig. The binned median is plotted for each calibrated simulation. Sizes from the Ref-LN model presented by S15 are also shown (yellow curve), to demonstrate consistency between volumes. The reference model tracks the observed relation closely, with the medians of the simulation and the observational measurements being offset by 0.
Also shown in Fig. For this reason, these models are not considered satisfactory when applying the EAGLE calibration criteria. We can therefore be confident that our conclusions here are unlikely to be strongly affected by, for example, attenuation by interstellar dust. The calibrated simulations adopt identical initial conditions, enabling galaxies to be compared individually as well as statistically. In Fig. The same mapping between physical flux and pixel luminosity is adopted in each panel.
The galaxy is shown face-on (left-hand panels) and edge-on (right-hand panels), oriented about the angular momentum vector of the star particles comprising the galaxy. The overall size of the optical envelope of the galaxy is similar in each case, but the distribution and dynamics of the stars differ markedly between Ref and the other models. In the latter, the galaxy forms an unrealistically compact bulge component at early epochs and exhibits too little ongoing star formation (blue-coloured concentrations in the extended disc).
Consequently, only in Ref does the galaxy exhibit an effective radius, R 50 , that is consistent with the observed size–mass relation for disc galaxies. However, the distribution of stars within that radius differs markedly between Ref and the other models, and this strongly influences the effective radius. In the Ref simulation the star-forming disc component, seen clearly in the face-on images as blue-coloured concentrations distributed over all radii, comprises a greater fraction of the mass. It is shown that models that yield unrealistically compact galaxies also fail to reproduce the observed star formation history and present-day SSFR of the galaxy population.
We demonstrate that the formation of compact galaxies is a consequence of numerical radiative losses becoming severe in high-density gas, thus artificially suppressing the efficiency of energy feedback. The evolution of stellar mass density in the four simulations can differ, however, because the history of energy injection from feedback varies between the models. Data points represent the comoving stellar mass density inferred from a number of complementary observational analyses. Where necessary, the data have been adjusted to adopt the same IMF and Hubble parameter as the simulations.
Diamonds represent measurements over the redshift interval 0. Data from surveys that overlap in redshift interval are shown in order to illustrate, broadly, the degree of systematic uncertainty and field-to-field variance in the measurements. The FBconst simulation, however, forms stars too rapidly at early epochs.
The models that allow the star formation feedback efficiency to vary as a function of the local environment track the observed build-up of stellar mass more accurately, since they typically inject more energy per unit stellar mass formed into star-forming regions in low-mass galaxies (which dominate at high redshift) than is the case for FBconst. As per Fig. The dashed horizontal line denotes the SSFR separating star-forming and passive galaxies.
This remains the case for Ref-LN. However, as shown by S15, the discrepancy for low-mass galaxies is much smaller for the Recal-LN simulation, indicating that our high-resolution simulations are better able to reproduce the star formation properties of low-mass galaxies. The FBZ model behaves similarly to Ref. In the framework of equilibrium galaxy formation models, this leaves differences in the mass reaccretion rate and the efficiency of preventive feedback as prime candidates for establishing an offset in the present-day SSFR of low-mass galaxies.
It is indeed likely that the reaccretion of ejected gas is sensitive to the details of the feedback. The efficiency of preventive feedback is, by construction, a distinguishing feature of the four calibrated models, and one that is simple to explore. The properties of simulated galaxies are clearly sensitive to the adopted functional form of f th . Galaxy sizes, which encode information related to the state of the gas from which stars were born, are the clearest discriminator of the models explored here, indicating a connection between the structure of the ISM and the efficacy of feedback.
When star particles are born, we record the density of their parent gas particle, enabling an examination of the physical conditions of the gas from which all stars in the simulations were born. The EAGLE simulations treat star-forming gas as a single-phase fluid, therefore the SPH density of star-forming particles can be considered as the mass-weighted average of the densities of cold, dense molecular clouds and of the warm, ionized medium with which they maintain a pressure equilibrium. Pressure is therefore a more physically meaningful property of star-forming gas in the simulations, and it is possible to recover the birth pressure of stars from their birth density under the reasonable assumption that their parent gas particle resided on the Jeans-limiting pressure floor at the time of conversion.
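Under that assumption the recovery is a one-line polytropic scaling, P ∝ ρ^γ_eff along the Jeans-limiting floor. The normalization, pivot density and effective index used below are illustrative assumptions:

```python
def birth_pressure(n_H_birth, n_H_floor=0.1, p_norm=1.0, gamma_eff=4.0 / 3.0):
    """Birth pressure (in units of the floor pressure p_norm at the
    pivot density n_H_floor), assuming the parent gas particle sat on a
    polytropic pressure floor P = p_norm * (n_H / n_H_floor)**gamma_eff."""
    return p_norm * (n_H_birth / n_H_floor) ** gamma_eff
```

A star born at eight times the pivot density, for instance, is assigned a birth pressure sixteen times the floor normalization (8^(4/3) = 16).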
Examination of this ratio affords us a means to test for numerical overcooling on an event-by-event basis. Many stars form from gas with pressures close to this threshold value. A peak develops in the differential distribution of the birth pressures of fluid elements; its development is suppressed in the Ref simulation owing to the greater efficiency of feedback at higher density (for fixed metallicity), which ensures that the majority of stars are able to yield numerically efficient feedback.
This initial problem has the potential to set in train a cycle of overcooling: the artificially rapid initial formation of stars over-enriches the ISM with efficient coolants, promoting further cooling losses and enabling dissipation to higher densities. An initial numerical shortcoming therefore has the potential to trigger unrealistic physical losses that themselves promote further numerical losses. This cycle can lead to a strong overestimate of the severity of radiative losses. This injection of additional energy into nascent galaxies is, however, insufficient to arrest the onset of subsequent numerical losses.
At such high density, resolution elements heated to 10 7. It ensures that star formation feedback remains efficient when stars form from relatively dense gas, preventing the build-up of the highest pressures within the ISM. A density dependence of this sort can also be motivated on physical grounds. The correspondence between numerically inefficient feedback and the formation of unrealistically compact galaxies is clear: the highest birth pressures are exhibited by stars residing in the centres of galaxies. The dotted vertical line shows the gravitational softening scale.
Conversely, the suppression of high birth densities in Ref yields effective radii that are consistent with observations. It is important to quantify the sensitivity of the outcomes of this model to variation of its key subgrid parameters. Parameters governing the ISM, star formation and the efficiency of star formation feedback are tested using relatively inexpensive LN simulations. Those governing the AGN feedback are tested with LN simulations, since the effects of changing these parameters are most clearly imprinted upon the properties of massive galaxies and their environments.
The effects of reasonable changes can vary markedly from parameter to parameter, so we focus here only on properties of the galaxy population that shift significantly from those of the corresponding Ref simulation. Simulations where the variation does have a significant effect are likely to yield a galaxy population that is no longer an accurate representation of the observed Universe.
Steeper slopes inhibit collapse, whilst shallower slopes may lead to artificial fragmentation. Curves show the binned median ratios, and are drawn with dotted lines below the mass scale corresponding to a threshold number of baryonic particles, and with a dashed line where sampled by fewer than 10 galaxies per bin.
There is a growing consensus, based on analyses of cosmological simulations, that galaxy formation and evolution is governed primarily by the regulation of the supply of gas to the ISM, rather than the detailed physics of interstellar gas itself e. In general, it is the feedback that regulates this fuel supply, so it is reasonable to expect that changing the efficiency of star formation feedback will have a significant impact on many characteristics of the galaxy population. This can be gauged by inspection of Fig.
Curves are drawn with dotted lines below the estimated resolution limit of each diagnostic, and with dashed lines where sampled by fewer than 10 galaxies per bin. The panels show (a) the GSMF; (b) the stellar mass to halo mass ratio of central galaxies; (c) the stellar half-mass radius; (d) the maximum circular velocity; (e) the SSFR of star-forming galaxies; (f) the fraction of galaxies that are passive; (g) the central BH mass; (h) the metallicity of star-forming gas; and (i) the metallicity of stars. As in the previous plots, binned medians are shown (WeakFB: green curve; Ref: dark blue curve; StrongFB: red curve), drawn with a dotted linestyle below the estimated resolution limit specific to each diagnostic, and a dashed linestyle where there are fewer than 10 galaxies per bin.
For more details concerning the observations and their comparison with the simulations, see S15. Inspection of the panels highlights that relatively small changes, i.e. factor-of-2 variations of the feedback efficiency, shift the GSMF appreciably. The cause of this shift of number densities is clear from inspection of the stellar mass to halo mass relation, panel (b). The form of the median relation is similar in all three cases, but with a significant normalization offset: in the WeakFB (StrongFB) model, galaxies acquire a lower (higher) characteristic value of f th throughout their growth. Since the dark matter halo mass function is particularly steep,
Even on more massive scales, the adoption of very efficient star formation feedback dramatically reduces the abundance of galaxies, although on these scales the adoption of inefficient star formation feedback does not result in a commensurate increase in the abundance of galaxies. As explored below, in the absence of efficient star formation feedback, BHs simply grow more massive at fixed halo mass in order to liberate the energy required to achieve self-regulation.
The shift of the typical halo mass associated with galaxies of fixed stellar mass, as the feedback efficiency is varied, impacts significantly upon galaxy scaling relations, as we explore below. Consistent with that discussion, the impact of the star formation feedback efficiency upon the sizes of galaxies is evident in panel (c).
The adoption of WeakFB leads to significant overcooling (in a physical sense), and results in artificially compact galaxies. In the regime of less significant overcooling, the efficiency of feedback still impacts upon galaxy sizes: feedback preferentially ejects the lowest angular momentum gas in galaxies. Changing the efficiency of star formation feedback impacts upon both the zero-point and the slope of the relation. To first order, V max is a reasonable proxy for halo mass, so changing the star formation feedback efficiency essentially shifts the stellar mass associated with a fixed V max , i.e. the zero-point.
However, the slope also changes because star formation feedback impacts most significantly upon low-mass galaxies, as is also clear from inspection of panel (b). Moreover, in high-mass galaxies weak feedback can result in an increase of V max due to the formation of compact bulges. The SSFR as a function of stellar mass is shown in panel (e), and highlights that the adoption of weaker (stronger) star formation feedback leads to lower (higher) SSFR at fixed stellar mass.
At first glance this may appear to contradict the interpretation of the relations presented in Fig. There is, however, an important distinction with respect to the calibrated simulations: galaxies of a fixed stellar mass in the WeakFB and StrongFB models are not associated with haloes of a similar mass to their counterparts in Ref.
The association of galaxies of fixed stellar mass with less (more) massive haloes in the WeakFB (StrongFB) model leads to them experiencing lower (higher) infall rates, both from the cosmological accretion of intergalactic gas and from the reaccretion of ejected material. Increasing the efficiency of feedback enables galaxies to balance a fixed net inflow rate at a lower SFR, but this is insufficient to compensate for the increased inflow rate that stems from being associated with a more massive halo.
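This balance is the usual "regulator" argument: in equilibrium the inflow rate is consumed by star formation plus a mass-loaded outflow, Mdot_in = (1 + eta) x SFR, so a higher mass loading eta lowers the SFR needed to balance a fixed inflow. A minimal sketch (the names and the constant mass-loading assumption are illustrative):

```python
def equilibrium_sfr(mdot_inflow, eta):
    """Equilibrium SFR when inflow is balanced by star formation plus an
    outflow of mdot_out = eta * SFR (eta is the mass-loading factor)."""
    return mdot_inflow / (1.0 + eta)
```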
The impact of changing the star formation feedback efficiency is not a simple shift in the normalization of passive fraction as a function of stellar mass, since approximately half of all galaxies in the stellar mass range examined are classified as passive in WeakFB. The adoption of weak star formation feedback results in a dramatically greater passive fraction for two reasons.
First, galaxies consume a greater fraction of their low-entropy gas for star formation, and do so at early times, reducing the reservoir of cold gas available for star formation at the present epoch. The second, and more important, effect is that changing the efficiency of star formation feedback also impacts upon the relationship between the masses of galaxies and their central BHs, as shown in panel (g).
However, this reasoning neglects another consequence of changing the star formation feedback efficiency: in order to achieve self-regulation, BHs in the WeakFB (StrongFB) case must compensate for the lower (higher) star formation feedback efficiency by liberating more (less) AGN feedback energy. Finally, panels (h) and (i) show the metallicities of the ISM (specifically, star-forming particles) and stars, respectively, as a function of stellar mass.
The observation of a similar anticorrelation between baryonic mass and metal loss in local galaxies is widely perceived as convincing evidence for the ubiquity of outflows, and of their efficiency as a mechanism for transporting metals away from the ISM. In summary, the efficiency of feedback associated with star formation has a strong effect on a broad range of galaxy scaling relations. This is primarily, but not exclusively, because factor-of-2 changes to the efficiency significantly alter the relationship between stellar mass and halo mass.
The parameter C visc is related to the inverse of the viscosity of a notional subgrid accretion disc, and has two effects. First, it governs the angular momentum scale at which the accretion switches from the relatively inefficient viscosity-limited regime to the Bondi-limited regime with both cases being subject to the Eddington limit.
Secondly, it governs the rate at which gas transits through the accretion disc when the viscosity-limited regime applies. A higher subgrid viscosity, which corresponds to a lower value of the viscosity parameter C visc , therefore leads to an earlier onset of the dominance of AGN feedback, and a greater energy injection rate when in the viscosity-limited regime. Curves are drawn with dotted lines below the estimated resolution limit of each diagnostic, and with dashed lines where sampled by fewer than 10 galaxies per bin.
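The interplay of the two regimes can be sketched as below. The specific functional form of the angular-momentum suppression factor, and the parameter names, are assumptions for illustration; the qualitative behaviour (suppression when rotation dominates, Bondi-limited otherwise, always capped at the Eddington rate) follows the description above:

```python
def accretion_rate(mdot_bondi, mdot_edd, c_s, v_phi, c_visc):
    """Bondi rate suppressed when the circulation speed v_phi of the
    accreting gas exceeds the sound speed c_s; larger c_visc (i.e. a
    lower subgrid viscosity) means stronger suppression. The Eddington
    cap is applied last."""
    suppression = min(1.0, (c_s / v_phi) ** 3 / c_visc)
    return min(mdot_bondi * suppression, mdot_edd)
```

Lowering c_visc (raising the subgrid viscosity) weakens the suppression, so the accretion rate, and hence AGN feedback, becomes significant at a lower BH mass, consistent with the behaviour described in the text.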
The slow initial growth stems from several physical causes. Growth by mergers with seed-mass BHs is inefficient, simply because the integrated mass of seeds encountered by any given BH is small (see fig. ). The significance of the latter is clear from inspection of panel (a), which shows that the characteristic mass at which BHs begin to grow efficiently is sensitive to C visc . Once gas accretion is efficient, BHs grow rapidly, because the feedback liberated by their growth is initially unable to regulate the accretion rate. Once sufficiently massive, however, BHs become able to regulate, or even quench, their own growth by gas accretion.
Therefore the stellar mass at which BHs arrive on the scaling relation is less sensitive to C visc than the stellar mass at which BHs begin to accrete efficiently. However, the three simulations do not converge to the same relation, indicating that accretion on to the BH remains partially viscosity-limited in even the most massive BHs. The role of the subgrid viscosity as a means to calibrate the simulations is clear from panel (b).
By shifting the stellar mass scale at which BHs begin to self-regulate, the assumed viscosity effectively controls the halo mass scale at which AGN feedback becomes significant. In the previous section, we concluded that the size of low-to-intermediate mass galaxies is sensitive to the efficiency of feedback associated with star formation. Star formation in more massive galaxies is regulated primarily by AGN feedback, so their sizes are more sensitive to the parameters governing AGN feedback, as shown in panel (c). Delaying the onset of efficient AGN feedback with a low subgrid viscosity results in the delivery of gas to the ISM being countered by star formation feedback alone.
Some degree of overcooling is therefore to be expected, a symptom of which is the formation of a massive, compact bulge component that reduces the effective radius of the galaxy. As for adjustments to f th , the change in the typical halo mass associated with galaxies of a fixed stellar mass also affects the sizes of galaxies, but here the effect is weaker, and limited to massive galaxies. They concluded that the higher heating temperature, which yields more energetic but less frequent AGN feedback episodes, was necessary to reproduce the gas fractions and X-ray luminosities of galaxy groups.
We make theoretical and methodological contributions to the CHI community by introducing comparisons between contemporary Critical Heritage research and some forms of experimental design practice. Beginning by identifying three key approaches in contemporary heritage research: Critical Heritage, Plural Heritages and Future Heritage we introduce these in turn, while exploring their significance for thinking about design, knowledge and diversity. We discuss our efforts to apply ideas integrating Critical Heritage and design through the adoption of known Research through Design techniques in a research project in Istanbul, Turkey describing the design of our study and how this was productive of sensory and speculative reflection on the past.
Finally, we reflect on the usefulness of such methods in developing new interactive technologies in heritage contexts and go on to propose a series of recommendations for a future Critical Heritage Design practice. Immersive open-ended museum exhibits promote ludic engagement and can be a powerful draw for visitors, but these qualities may also make learning more challenging. We used an iterative design process and qualitative methods to explore how and if visitors could 1 access and 2 comprehend the data visualizations, 3 reflect on their prior engagement with the exhibit, 4 plan their future engagement with the exhibit, and 5 act on their plans.
We further discuss the essential design challenges and the opportunities made possible for visitors through data-driven reflection tools. Many traditional HCI methods, such as surveys and interviews, are of limited value when working with preschoolers. In this paper, we present anchored audio sampling AAS , a remote data collection technique for extracting qualitative audio samples during field deployments with young children. AAS offers a developmentally sensitive way of understanding how children make sense of technology and situates their use in the larger context of daily life.
AAS is defined by an anchor event, around which audio is collected. A sliding window surrounding this anchor captures both antecedent and ensuing recording, providing the researcher insight into the activities that led up to the event of interest as well as those that followed. We present themes from three deployments that leverage this technique. Based on our experiences using AAS, we have also developed a reusable open-source library for embedding AAS into any Android application.
Social play can have numerous health benefits but research has shown that not all multiplayer games are effective at promoting social engagement. Asymmetric cooperative games have shown promise in this regard but the design and dynamics of this unique style of play is not yet well understood.
In this study, we propose a framework for the design of tools to support teaching to children with disabilities. The framework provides the necessary stages for the development of tools hardware-based or software-based and must be adapted for a specific disability and educational goal.
Humans can estimate the shape of a wielded object through the illusory feeling of the mass properties of the object obtained using their hands. Even though the shape of hand-held objects influences immersion and realism in virtual reality VR , it is difficult to design VR controllers for rendering desired shapes according to the perceptions derived from the illusory effects of mass properties and shape perception. We propose Transcalibur, which is a hand-held VR controller that can render a 2D shape by changing its mass properties on a 2D planar area. We built a computational perception model using a data-driven approach from the collected data pairs of mass properties and perceived shapes.
This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment. The light field display is created by a retro-reflective sheet that is mounted on the cylindrical quadcopter.
This creates a light field that naturally provides motion parallax and stereoscopy without requiring any headset nor stereo glasses. The system is currently one-directional: 2 small cameras mounted on the drone allow the remote user to observe the local scene. Complex virtual reality VR tasks, like 3D solid modelling, are challenging with standard input controllers. We propose exploiting the affordances and input capabilities when using a 3D-tracked multi-touch tablet in an immersive VR environment.
Observations gained during semi-structured interviews with general users, and those experienced with 3D software, are used to define a set of design dimensions and guidelines. Key aspects of the vocabulary are evaluated with users, with results validating the approach. We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated using two arm positions: with the hand raised up with the palm parallel to the ground; and with the hand resting at the side with the palm facing the body.
BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. We use the resulting, characteristic reflections to recognize hand pose at 8 FPS.
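Projecting acoustic energy at specified angles and focal lengths, as described above, corresponds to classic delay-and-sum focusing: elements farther from the focal point fire earlier so all wavefronts arrive in phase. The sketch below illustrates the geometry only; it is not BeamBand's actual implementation, and the array pitch and focal parameters are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def steering_delays(element_positions, angle_deg, focal_length):
    """Per-transducer emission delays (seconds) that focus acoustic
    energy at a point at `angle_deg` and `focal_length` metres from
    the origin of a linear array laid out along the x-axis."""
    theta = math.radians(angle_deg)
    focus = (focal_length * math.cos(theta), focal_length * math.sin(theta))
    # Distance from each element to the desired focal point.
    dists = [math.hypot(focus[0] - x, focus[1]) for x in element_positions]
    # The farthest element fires first (zero delay); nearer elements
    # wait so that all wavefronts coincide at the focus.
    longest = max(dists)
    return [(longest - d) / SPEED_OF_SOUND for d in dists]

# Hypothetical 4-element array with 5 mm pitch, focusing 10 cm away at 30 degrees.
elements = [i * 0.005 for i in range(4)]
delays = steering_delays(elements, 30.0, 0.10)
```

With the focus off to one side of the array, the computed delays increase monotonically across the elements, which is what steers the beam.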
In our user study, we found that BeamBand supports a six-class hand gesture set. Even across sessions, when the sensor is removed and reworn later, accuracy remains high. We describe our software and hardware, and future avenues for integration into devices such as smartwatches and VR controllers. People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. In order to learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies.
We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant to their travel experience. We found that despite the challenging environment, participants were able to complete their itinerary independently, making few to no navigation errors and achieving reasonable timings. This study presents the first systematic evaluation of BLE technology as a strong approach to increasing the independence of visually impaired people in airports.
Online deliberation offers a way for citizens to collectively discuss an issue and provide input for policymakers. The overall experience of online deliberation can be affected by multiple factors. We decided to investigate the effects of moderation and opinion heterogeneity on the perceived deliberation experience, by running the first online deliberation experiment in Singapore.
Our study took place over three months in three phases. In phase 1, our 2, participants answered a survey that we used to create groups of different opinion heterogeneity. During the second phase, participants discussed the population issue on the online platform we developed. We gathered data on their online deliberation experience during phase 3.
We found that higher levels of moderation negatively impact the deliberation experience in terms of perceived procedural fairness, validity claim, and policy legitimacy, and that high opinion heterogeneity is important for obtaining a fair assessment of the deliberation experience. Recent years have seen interest in device tracking and localization using acoustic signals.
Further, tracking multiple concurrent acoustic transmissions from VR devices today requires sacrificing accuracy or frame rate. We present MilliSonic, a novel system that pushes the limits of acoustic-based motion tracking.
Our core contribution is a novel localization algorithm that can provably achieve sub-millimeter 1D tracking accuracy in the presence of multipath, while using only a single beacon with a small 4-microphone array. Further, MilliSonic enables concurrent tracking of up to four smartphones without reducing frame rate or accuracy. Our evaluation shows that MilliSonic achieves this sub-millimeter accuracy in practice. MilliSonic enables two previously infeasible interaction applications: (a) 3D tracking of VR headsets using the smartphone as a beacon and (b) fine-grained 3D tracking for the Google Cardboard VR system using a small microphone array.
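Sub-millimeter acoustic tracking of this kind rests on a simple principle: at a 20 kHz carrier the wavelength is about 17 mm, so each 2π of carrier-phase change corresponds to one wavelength of path-length change, and a few degrees of phase resolution already yields sub-millimeter displacement. The sketch below illustrates that generic phase-to-displacement conversion; it is not MilliSonic's published algorithm, which additionally handles multipath and concurrent devices.

```python
import math

def phase_to_displacement(phases_rad, freq_hz, speed_of_sound=343.0):
    """Convert a sampled carrier-phase series (radians) into 1D
    displacement (metres): each 2*pi of unwrapped phase equals one
    wavelength of path-length change."""
    wavelength = speed_of_sound / freq_hz
    unwrapped = [phases_rad[0]]
    for p in phases_rad[1:]:
        # Standard unwrapping: pick the phase step closest to zero.
        delta = (p - unwrapped[-1] + math.pi) % (2 * math.pi) - math.pi
        unwrapped.append(unwrapped[-1] + delta)
    return [(u - unwrapped[0]) / (2 * math.pi) * wavelength for u in unwrapped]

# A quarter-turn of phase at a 20 kHz carrier ~ 4.3 mm of motion.
disp = phase_to_displacement([0.0, math.pi / 4, math.pi / 2], 20000.0)
```

The unwrapping step is what lets displacement accumulate smoothly past each wavelength boundary instead of jumping when the raw phase wraps around 2π.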
Microtasks enable people with limited time and context to contribute to a larger task. In this paper we explore casual microtasking, where microtasks are embedded into other primary activities so that they are available to be completed when convenient. Participants were most likely to complete the writing microtasks during periods of the day associated with low focus, and would occasionally use them as a springboard to open the original document in Word.
These findings suggest casual microtasking can help people leverage spare micromoments to achieve meaningful micro-goals, and even encourage them to return to work. While there is widespread recognition of the need to provide people with vision impairments (PVI) equitable access to cultural institutions such as art galleries, this is not easy. We present the results of a collaboration with a regional art gallery that wished to open its collection to PVIs in the local community.
We describe a novel model that provides three different ways of accessing the gallery, depending upon visual acuity and mobility: virtual tours, self-guided tours and guided tours. As far as possible the model supports autonomous exploration by PVIs. It was informed by a value sensitive design exploration of the values and value conflicts of the primary stakeholders.
Existing co-design methods support verbal children on the autism spectrum in the design process, while their minimally-verbal peers are overlooked. These emphasise the rich detail that can be conveyed in the moment, through recognising occurrences of, for example, Joint Attention, Turn Taking and Imitation. We worked in an autism-specific primary school over 20 weeks with ten children, aged 5 to 8.
We co-designed a playful prototype, the TangiBall, using the three iterative phases of CDBW: the Foundation Phase (preparation for interaction), the Interaction Phase (designing-and-reflecting in the moment) and the Reflection Phase (reflection-on-action). We contribute a novel co-design approach and present moments of interaction, the micro instances in design in which minimally-verbal children on the spectrum can convey meaning beyond words, through their actions, interactions, and attentional foci.
These moments of interaction provide design insight, shape design direction, and reveal unique strengths, interests, and abilities. Finally, the emotional utility of each encountered thread was rated while looking over a recording of the interaction. We report that Facebook browsing was, overall, an emotionally positive experience; that recall of threads exhibited classic primacy and recency serial order effects; that recalled threads were both more positive and more valenced (less neutral), on average, than forgotten threads; and that overall emotional valence judgments were predicted, statistically, by the peak and end thread judgments.
We find no evidence that local quit decisions were driven by the emotional utility of threads. In the light of these findings, we discuss the suggestion that emotional utility might partly explain the attractiveness of reading the news feed, and that an emotional memory bias might further increase the attractiveness of the newsfeed in prospect. While repair work has recently been getting increasing attention in HCI, recycling practices have still remained relatively understudied, especially in the context of the Global South.
In doing so, this paper offers the work of the bhangaris through an articulation of their hands and their uses. Drawing from a rich body of scholarly work in the social sciences, we define and contextualize three characteristics of the hand of a bhangari: knowledge, care, and skills and collaboration. Our study also highlights the pains and sufferings involved in this profession. Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in mobile and public scenarios, where only one hand may be available for input, e.g.
To address these issues, we propose EarTouch, a one-handed interaction technique that allows the users to interact with a smartphone using the ear to perform gestures on the touchscreen. Users hold the phone to their ears and listen to speech output from the ear speaker privately. We report how the technique was designed, implemented, and evaluated through a series of studies. Results show that EarTouch is easy, efficient, fun, and socially acceptable to use. In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes, namely the native language and English, and often experience mode errors.
In the studies considering Korean-English dual input, Auto-switch was ineffective; Preview, on the contrary, significantly reduced mode errors. Public sharing is integral to online platforms. This includes the popular multimedia messaging application Snapchat, on which public sharing is relatively new and unexplored in prior research. In mobile-first applications, sharing contexts are dynamic.
As platforms increasingly rely on user-generated content, it is important to also broadly understand user motivations and considerations in public sharing. We explored these aspects of content sharing through a survey of 1, Snapchat users. Our results indicate that users primarily have intrinsic motivations for publicly sharing Snaps, such as to share an experience with the world, but also have considerations related to audience and sensitivity of content. Additionally, we found that Snaps shared publicly were contextually different from those privately shared.
Our findings suggest that content sharing systems can be designed to support sharing motivations, yet also be sensitive to private contexts. We present Cluster Touch, a combined user-independent and user-specific touch offset model that improves the accuracy of touch input on smartphones for people with motor impairments, and for people experiencing situational impairments while walking.
Cluster Touch combines touch examples from multiple users to create a shared user-independent touch model, which is then updated with touch examples provided by an individual user to make it user-specific. Owing to this combination, Cluster Touch allows people to quickly improve the accuracy of their smartphones by providing only 20 touch examples. In a user study with 12 people with motor impairments and 12 people without motor impairments, but who were walking, Cluster Touch improved touch accuracy for both groups. Furthermore, in an offline analysis of existing mobile interfaces, Cluster Touch also improved touch accuracy.
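The combination of a shared model with a small number of user-specific examples can be pictured with a toy offset model: the screen is divided into coarse cells, each storing a weighted mean (dx, dy) touch offset seeded from everyone's examples, with an individual's examples weighted more heavily so that roughly 20 of them can shift the model. This is only a sketch in the spirit of the approach, not Cluster Touch's actual model; the grid size and weights are assumptions.

```python
def build_offset_model(shared_examples, user_examples, user_weight=5.0):
    """Toy combined touch-offset model. Examples are (x, y, dx, dy)
    tuples in pixels, where (dx, dy) is the observed offset between
    the touch point and the intended target."""
    cells = {}

    def add(x, y, dx, dy, w):
        key = (int(x // 100), int(y // 100))  # 100-px grid cells
        sw, sdx, sdy = cells.get(key, (0.0, 0.0, 0.0))
        cells[key] = (sw + w, sdx + w * dx, sdy + w * dy)

    for x, y, dx, dy in shared_examples:      # user-independent part
        add(x, y, dx, dy, 1.0)
    for x, y, dx, dy in user_examples:        # user-specific part, upweighted
        add(x, y, dx, dy, user_weight)

    def correct(x, y):
        """Subtract the cell's weighted-mean offset from a raw touch."""
        key = (int(x // 100), int(y // 100))
        w, sdx, sdy = cells.get(key, (1.0, 0.0, 0.0))
        return (x - sdx / w, y - sdy / w)

    return correct

shared = [(50.0, 50.0, 5.0, 0.0)] * 10        # crowd tends to touch 5 px right
user = [(50.0, 50.0, -5.0, 0.0)] * 10         # this user touches 5 px left
generic = build_offset_model(shared, [])
personal = build_offset_model(shared, user)
```

With no user examples the model corrects leftward (the crowd's bias); after the user's upweighted examples are added, the correction flips toward the individual's own bias.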
While prior research has revealed the promising impact of concept mapping on learning, few have comprehensively modeled different cognitive behaviors during concept mapping. In addition, existing concept mapping tools lack effective feedback to support better learning behaviors. This work presents MindDot, a concept map-based learning environment that facilitates the cognitive process of comparing and integrating related concepts via two forms of support.
These take the form of hyperlink support and an expert template. Study results suggested that both types of support had a positive impact on the development of comparative strategies and that hyperlink support enhanced learning. We further evaluated the cognitive learning progress at a fine-grained level with two forms of visualizations.
We then extracted several behavioral patterns that provided insights about the cognitive progress in learning. Conventional hearing aids frame hearing impairment almost exclusively as a problem. To this end, we developed a method to speculate simultaneously about not-yet-experienced positive meanings and not-yet-existing technology. First, we gathered already existing activities in which divergent hearing was experienced as an advantage rather than as a burden.
The paper provides valuable insights into the interests and expectations of people with divergent hearing, as well as a methodological contribution to possibility-driven design. Failure is a common artefact of challenging experiences, a fact of life for interactive systems, but also a resource for aesthetic and improvisational performance. We present a study of how three professional pianists performed an interactive piano composition that included playing hidden codes within the music so as to control their path through the piece and trigger system actions.
We reveal how apparent failures to play the codes occurred for diverse reasons, including mistakes in their playing, limitations of the system, but also deliberate failures as a way of controlling the system, and how these failures provoked aesthetic and improvised responses from the performers. We propose that creative and performative interfaces should be designed to enable aesthetic failures, and introduce a taxonomy that compares human approaches to failure with those of capable systems, revealing new creative design strategies of gaming, taming, riding and serving the system.
People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, many times in public. Although research in non-health settings suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups where people want to get relevant help.
These channel effects probably occur because the public channels are the primary venue for support exchange, while the private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work. Playful technology has the potential to support physical activity PA among wheelchair users, but little is known about design considerations for this audience, who experience significant access barriers.
First, we present findings from an interview study with eight physically active wheelchair users. Second, we build on the interviews in a survey that received 44 responses from a broader group of wheelchair users. Results show that the anticipation of positive experiences was the strongest predictor of engagement with PA, and that accessibility concerns act as barriers both in terms of PA participation and technology use. We present four design goals — emphasizing enjoyment, involving others, building knowledge and enabling flexibility — to make our findings actionable for researchers and designers wishing to create accessible playful technology to support PA.
Vulnerability is a common experience in everyday life and is frequently perceived as a flaw to be excised in technology design. Yet, research indicates it is an essential aspect of wholehearted living among others. We describe the Research-through-Design process that helped us to discover and articulate the possibility space of vulnerability in the design of social wearables, as support for producing a sense of social empowerment and connection among wearers within the LARP.
We describe the design and deployment of Olly, a domestic music player that enables people to re-experience digital music they listened to in the past. Olly taps into a person's Last.FM listening history metadata archive to occasionally select a song from their past, but offers no user control over what is selected or when.
We deployed Olly in 3 homes for 15 months to explore how its slow pace might support experiences of reflection and reminiscence. Findings revealed that Olly became highly integrated in participants' lives, with sustained engagement over time. They drew on Olly to reflect on past life experiences, and reactions indicated an increase in the perceived value of their Last.FM archive. Olly also provoked reflections on the temporalities of personal data and technology. Findings are interpreted to present opportunities for future HCI research and practice. Notions of what counts as a contribution to HCI continue to be contested as our field expands to accommodate perspectives from the arts and humanities. We designed a mobile neurofeedback app, called Mind-Full, based on existing design guidelines. Our goal was for young children in lower socio-economic status schools to improve their ability to self-regulate anxiety by using Mind-Full.
In this paper we report on quantitative outcomes from a sixteen-week field evaluation with 20 young children aged 5 to 8. Our methodological contribution includes using a control group, validated measures of anxiety and stress, and assessing transfer and maintenance. Thermoplastic, Fused Deposition Modeling (FDM)-based 4D printing is rapidly expanding to allow space- and material-saving 2D printed sheets to morph into 3D shapes when heated.
However, to our knowledge, all the known examples are either origami-based models with obvious folding hinges, or beam-based models with holes on the morphing surfaces. Morphing continuous double-curvature surfaces remains a challenge, both in terms of a tailored toolpath-planning strategy and a computational model that simulates it.
Additionally, neither approach takes surface texture as a design parameter in its computational pipeline. To extend the design space of FDM-based 4D printing, in Geodesy we focus on the morphing of continuous double-curvature surfaces or surface textures. We suggest a unique tool path — printing thermoplastics along 2D closed geodesic paths to form a surface with raised continuous double-curvature tiles when exposed to heat.
The design space is further extended to more complex geometries composed of a network of rising tiles. Both the design components and the computational pipeline are explained in the paper, followed by several printed geometric examples. When human musicians improvise together, a number of extra-musical cues are used to augment musical communication and expose mental or emotional states, which affect musical decisions and the effectiveness of the collaboration.
We developed a collaborative improvising AI drummer that communicates its confidence through an emoticon-based visualisation. The AI was trained on musical performance data, as well as real-time skin conductance, of musicians improvising with professional drummers, exposing both musical and extra-musical cues to inform its generative process. Uni- and bi-directional extra-musical communication with real and false values were tested by experienced improvising musicians.
Each condition was evaluated using the FSS-2 questionnaire, as a proxy for musical engagement. The results show a positive correlation between extra-musical communication of machine internal state and human musical engagement. Collocated, face-to-face teamwork remains a pervasive mode of working, which is hard to replicate online.
However, the ready availability of sensors makes it increasingly affordable to instrument work spaces to study teamwork and groupwork. The possibility of visualising key aspects of a collaboration has huge potential for both academic and professional learning, but a frontline challenge is the enrichment of quantitative data streams with the qualitative insights needed to make sense of them. In response, we introduce the concept of collaboration translucence, an approach to make visible selected features of group activity. This is grounded both theoretically (in the physical, epistemic, social and affective dimensions of group activity) and contextually (using domain-specific concepts).
We illustrate the approach from the automated analysis of healthcare simulations to train nurses, generating four visual proxies that fuse multimodal data into higher order patterns. This paper investigates personalized voice characters for in-car speech interfaces. In particular, we report on how we designed different personalities for voice assistants and compared them in a real world driving study. Voice assistants have become important for a wide range of use cases, yet current interfaces are using the same style of auditory response in every situation, despite varying user needs and personalities.
To close this gap, we designed four assistant personalities (Friend, Admirer, Aunt, and Butler) and compared them to a baseline (Default) in a between-subject study in real traffic conditions. We discuss design aspects for voice assistants in different automotive use cases. Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices.
Yet little is known about the concerns of those most likely to be affected by these systems. We report on workshops conducted to learn about the concerns of affected communities in the context of child welfare services. The workshops involved 83 study participants including families involved in the child welfare system, employees of child welfare agencies, and service providers.
Our findings indicate that general distrust in the existing system contributes significantly to low comfort in algorithmic decision-making. We identify strategies for improving comfort through greater transparency and improved communication strategies. We discuss the implications of our study for accountable algorithm design for child welfare applications. During sensemaking, people annotate insights: underlining sentences in a document or circling regions on a map.
They jot down their hypotheses: drawing correlation lines on scatterplots or creating personal legends to track patterns. We present ActiveInk, a system enabling people to seamlessly transition between exploring data and externalizing their thoughts using pen and touch. ActiveInk enables the natural use of pen for active reading behaviors, while supporting analytic actions by activating any of these ink strokes. Through a qualitative study with eight participants, we contribute observations of active reading behaviors during data exploration and design principles to support sensemaking.
Computer vision and pattern recognition are increasingly being employed by smartphone and tablet applications targeted at lay-users. An open design challenge is to make such systems intelligible without requiring users to become technical experts. This paper reports a lab study examining the role of visual feedback. Participants in our study showed a tendency to misunderstand the meaning being conveyed by the feedback, relating it to processing outcomes and higher level concepts, when in reality the feedback represented low level features.
Drawing on the experimental results and the qualitative data collected, we discuss the challenges of designing interactions around pattern matching algorithms. What makes a city meaningful to its residents? What attracts people to live in a city and to care for it? This theory offers ideas for developing community attachment, heightening the legibility of the city, and intensifying lived experiences in the city.
We add to this body of research with an analysis of several initiatives of City Yeast, a community-based design collective in Taiwan that proposes the metaphor of fermentation as an approach to placemaking. We unpack how this approach shapes their design practice and link its implications to urban informatics research in HCI. We suggest that smart cities can also be pursued by leveraging the knowledge of city residents and helping to facilitate their participation in acts of perceiving, envisioning, and improving their local communities, including but not limited to smart and connected technologies.
Through a design-led inquiry focused on smart home security cameras, this research develops three key concepts for research and design pertaining to new and emerging digital consumer technologies. Digital leakage names the propensity for digital information to be shared, stolen, and misused in ways unbeknownst or even harmful to those to whom the data pertains or belongs. Foot-in-the-door devices are products and services with functional offerings and affordances that work to normalize and integrate a technology, thus laying groundwork for future adoption of features that might earlier have been rejected as unacceptable or unnecessary.
Developed and illustrated through a set of design studies and explorations, this paper shows how these concepts may be used analytically to investigate issues such as privacy and security, anticipatorily to speculate about the future of technology development and use, and generatively to synthesize design concepts and solutions. To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with DHH participants.
The survey explores how demographic factors affect perceptions of sound awareness technologies, gauges interest in specific sounds and sound characteristics, solicits reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probes issues related to the social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference—that is, for sign or oral communication or both. Other findings related to sound type, full captions vs.
However, their invisible nature with no or limited visuals makes it difficult for users to interact with unfamiliar VUIs. We analyze the impact of user characteristics and preferences on how users interact with a VUI-based calendar, DiscoverCal. While recent VUI studies analyze user behavior through self-reported data, we extend this research by analyzing both VUI usage data and self-reported data to observe correlations between both data types.
Difficulties in accessing, isolating, and iterating on the components and connections of a printed circuit board (PCB) create unique challenges in PCB debugging. Manual probing methods are slow and error-prone, and even dedicated PCB testing equipment remains limited by its inability to modify the circuit during testing. We present Pinpoint, a tool that facilitates in-circuit PCB debugging through techniques such as programmatically probing signals, dynamically disconnecting components and subcircuits to test in isolation, and splicing in new elements to explore potential modifications.
The contours of user experience UX design practice have been shaped by a diverse array of practitioners and disciplines, resulting in a diffuse and decentralized body of UX-specific disciplinary knowledge. The rapidly shifting space that UX knowledge occupies, in conjunction with a long-existing research-practice gap, presents unique challenges and opportunities to UX educators and aspiring UX designers.
Specifically, we used natural language processing techniques and qualitative content analysis to identify a disciplinary vocabulary invoked by UX designers in this online community, as well as conceptual trajectories spanning over nine years which could shed light on the evolution of UX practice. This assumption has not been systematically studied. We present an in-lab experiment and a Mechanical Turk study to examine the effects of integral and separable visual cues on the recall and comprehension of visualizations that are accompanied by audio narration.
Eye-tracking data in the in-lab experiment confirm that cues helped the viewers focus on relevant parts of the visualization faster. We found that in general, visual cues did not have a significant effect on learning outcomes, but for specific cue techniques e.
Based on these results, we discuss how presenters might select visual cues depending on the role of the cues and the visualization type. Mobile self-reports are a popular technique to collect participant labelled data in the wild. While literature has focused on increasing participant compliance to self-report questionnaires, relatively little work has assessed response accuracy. In this paper, we investigate how participant context can affect response accuracy and help identify strategies to improve the accuracy of mobile self-report data. In a 3-week study we collect over 2, questionnaires containing both verifiable and non-verifiable questions.
We find that response accuracy is higher for questionnaires that arrive when the phone is not in ongoing or very recent use. Furthermore, our results show that long completion times are an indicator of a lower accuracy. We offer actionable recommendations to assist researchers in their future deployments of mobile self-report studies. We present an assistive suitcase system, BBeep, for supporting blind people when walking through crowded environments.
BBeep uses pre-emptive sound notifications to help clear a path by alerting both the user and nearby pedestrians about the potential risk of collision. BBeep tracks pedestrians, predicts their future positions in real time, and provides sound notifications only when it anticipates a future collision. We investigate how different types and timings of sound affect nearby pedestrian behavior. In our experiments, we found that sound emission timing has a significant impact on nearby pedestrian trajectories when compared to different sound types.
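Anticipating a collision from tracked pedestrian positions can be done, in its simplest form, by linearly extrapolating a pedestrian's recent motion and checking whether the predicted path passes near the user within a short horizon. The sketch below illustrates that idea only; it is not BBeep's actual predictor, and the horizon, time step, and collision radius are assumed values.

```python
def predict_collision(ped_track, user_pos, horizon_s=2.0, dt=0.1, radius=0.6):
    """Linearly extrapolate a pedestrian's last two (t, x, y) samples
    and return the time (seconds) until the predicted path first comes
    within `radius` metres of the (assumed stationary) user position,
    or None if no collision is predicted within `horizon_s` seconds."""
    (t0, x0, y0), (t1, x1, y1) = ped_track[-2], ped_track[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    steps = round(horizon_s / dt)
    for i in range(1, steps + 1):
        px = x1 + vx * i * dt
        py = y1 + vy * i * dt
        if (px - user_pos[0]) ** 2 + (py - user_pos[1]) ** 2 <= radius ** 2:
            return i * dt  # time until the predicted collision
    return None

# Pedestrian 4 m away, walking toward the user at 2 m/s.
approaching = [(0.0, 0.0, 5.0), (0.5, 0.0, 4.0)]
t_hit = predict_collision(approaching, user_pos=(0.0, 0.0))
```

A pedestrian whose extrapolated path never enters the collision radius yields None, so no alert would be triggered for people walking away or passing at a safe distance.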
Based on these findings, we performed a real-world user study at an international airport, where blind participants navigated with the suitcase in crowded areas. We observed that the proposed system significantly reduces the number of imminent collisions. In recent years, research has revealed gender biases in numerous software products. But although some researchers have found ways to improve gender participation in specific software projects, general methods focus mainly on detecting gender biases — not fixing them. To help fill this gap, we investigated whether the GenderMag bias detection method can lead directly to designs with fewer gender biases.
In our 3-step investigation, two HCI researchers analyzed an industrial software product using GenderMag; we derived design changes to the product using the biases they found; and ran an empirical study of participants using the original product versus the new version. Autonomous driving provides new opportunities for the use of time during a car ride.
One such important scenario is working. We conducted a neuroergonomic study to compare three configurations of a car interior (based on lighting, visual stimulation, and sound) regarding their potential to support productive work. We assessed participants' performance and cognitive workload. Our results show that a configuration with a large-area, bright light with high blue components and reduced visual and auditory stimuli promotes performance, quality, efficiency, increased concentration, and lower cognitive workload.
Increased visual and auditory stimulation paired with linear, darker light with very few blue components resulted in lower performance, reduced subjective concentration, and higher cognitive workload, but did not differ from a normal car configuration. Our multi-method approach thus reveals possible car-interior configurations for an ideal workspace.

We propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting which direction the hands approach from.
To achieve this, our system combines three sensing modalities: (1) raw capacitance touchscreen images, (2) inertial motion, and (3) electric field sensors around the screen bezel for grasp and hand-proximity detection.

Virtual Reality (VR) experiences are often limited by the design of standard controllers. This work aims to free VR developers from these limitations in the physical realm, providing an expressive match to the limitless possibilities of the virtual realm. VirtualBricks is a LEGO-based toolkit that enables the construction of a variety of physical-manipulation-enabled controllers for VR by offering a set of feature bricks that emulate, as well as extend, the capabilities of default controllers.
We demonstrate the versatility of our designs through a rich set of applications, including re-implementations of artifacts from recent research.

We present CATS, a digital painting system that synthesizes textures from live video in real time, short-cutting the typical brush- and texture-gathering workflow. Through boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other.
This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool that can be used to create richly textured paintings.

Full-coverage displays can place visual content anywhere on the interior surfaces of a room.
In these settings, digital artefacts can be located behind the user and out of their field of view — meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the necessary location in room environments.
We designed five diverse attention-guiding techniques for full-coverage display rooms, and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed benefits of persistent visualisations that could be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.

We compared four audio-based radar metaphors for providing directional stimuli to users of AR headsets.
The metaphors are clock face, compass, white noise, and scale. Each metaphor, or method, signals the movement of a virtual arm in a radar sweep. In a user study, statistically significant differences were observed for accuracy and response time. Beat-based methods (clock face, compass) elicited responses biased to the left of the stimulus location, and non-beat-based methods (white noise, scale) produced responses biased to the right of the stimulus location.
The beat methods were more accurate than the non-beat methods. However, the non-beat methods elicited quicker responses. We also discuss how response accuracy varies along the radar sweep between methods. These observations contribute design insights for non-verbal, non-visual directional prompting.

As design thinking shifted away from conventional methods with the rapid adoption of computer-aided design and fabrication technologies, architects have been seeking ways to initiate a comprehensive dialogue between the virtual and the material realms.
Current methodologies do not offer embodied workflows that use the feedback obtained through successive transitions between physical and digital design; narrowing the separation between these two platforms therefore remains a research problem. This literature review examines the divide between physical and digital design, testing, and manufacturing techniques in the morphological process of architectural form.
We first review the digital transformation in architectural design discourse. We then introduce a variety of methods that integrate digital and physical workflows, and suggest an alternative approach. Our review reveals a need for empirical research focused on integrated approaches that create intuitively embodied experiences for architectural designers.

Breastfeeding is not only a public health issue, but also a matter of economic and social justice.
This paper presents an iteration of a participatory design process to create spaces for re-imagining products, services, systems, and policies that support breastfeeding in the United States. Our work contributes to a growing literature around making hackathons more inclusive and accessible, designing participatory processes that center marginalized voices, and incorporating systems- and relationship-based approaches to problem solving.
Key to our re-imagining of conventional innovation structures is a focus on experience design, where joy and play serve as key strategies to help people and institutions build relationships across lines of difference. We conclude with a discussion of design principles applicable not only to designers of events, but also to social movement researchers and HCI scholars seeking to address oppression through the design of technologies and socio-technical systems.

We introduce Project Sidewalk, a new web-based tool that enables online crowdworkers to remotely label pedestrian-related accessibility problems by virtually walking through city streets in Google Street View.
To train, engage, and sustain users, we apply basic game design principles such as interactive onboarding, mission-based tasks, and progress dashboards. In a months-long deployment study, online users contributed labels and audited miles of Washington DC streets. We compare behavioral and labeling-quality differences between paid crowdworkers and volunteers, investigate the effects of label type, label severity, and majority vote on accuracy, and analyze common labeling errors. Our findings demonstrate the potential of virtually auditing urban accessibility and highlight tradeoffs in scalability and quality compared with traditional approaches.
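The majority-vote aggregation mentioned above can be sketched in a few lines. This is an illustrative reconstruction, not Project Sidewalk's code; the function name, label strings, and the minimum-vote threshold are all assumptions.

```python
from collections import Counter

def majority_vote(labels, min_votes=3):
    """Aggregate per-location crowd labels by strict majority vote.

    `labels` maps a location id to the list of label types assigned by
    individual workers. Locations with fewer than `min_votes` labels,
    or with no strict majority, are left unresolved.
    """
    resolved = {}
    for loc, votes in labels.items():
        if len(votes) < min_votes:
            continue  # not enough independent judgments yet
        label, count = Counter(votes).most_common(1)[0]
        if count * 2 > len(votes):  # strict majority (> 50%)
            resolved[loc] = label
    return resolved
```

Varying `min_votes` trades label quality against how many locations can be resolved from a fixed labeling budget.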
The sport data tracking systems available today rely on specialized hardware (high-definition cameras, speed radars, RFID) to detect and track targets on the field. While effective, implementing and maintaining these systems poses a number of challenges, including high cost and the need for close human monitoring. The sports analytics community has therefore been exploring human computation and crowdsourcing to produce tracking data that is trustworthy, cheaper, and more accessible.
However, state-of-the-art methods either require a large number of users to perform the annotation or place too much burden on a single user. We propose HistoryTracker, a methodology that facilitates the creation of tracking data for baseball games by warm-starting the annotation process using a vast collection of historical data. We show that HistoryTracker helps users produce tracking data in a fast and reliable way.
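Warm-starting from historical data can be sketched as a nearest-neighbour retrieval: seed the annotation of a new play with the most similar previously tracked play. This is a toy sketch under our own assumptions; the feature representation, distance metric, and all names are illustrative and not drawn from the HistoryTracker paper.

```python
def warm_start(query_features, history):
    """Return the id of the historical play whose summary feature
    vector is closest (Euclidean distance) to the new play, so its
    stored tracks can seed the annotation instead of a blank slate.

    `history` maps play ids to feature vectors of the same length as
    `query_features`.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return min(history, key=lambda pid: dist(history[pid], query_features))
```

An annotator would then only correct the seeded tracks where the new play diverges from the retrieved one, rather than labelling every frame from scratch.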
Clinical psychology literature indicates that reframing irrational thoughts can help bring positive cognitive change to those suffering from mental distress. Using data from an online mental health forum, we study how these cognitive processes play out in peer-to-peer conversations. We then propose a predictive model that can identify whether a conversation thread or a post is associated with a moment of cognitive change.
Consistent with the psychological literature, we find that markers of language associated with sentiment and affect are the most predictive. Further, cultural differences play an important role: predictive models trained on one country generalize poorly to others. To understand how a moment of change happens, we build a model that explicitly tracks topic and associated sentiment in a forum thread.

We present explorable multiverse analysis reports, a new approach to statistical reporting in which readers of research papers can explore alternative analysis options by interacting with the paper itself.
This approach draws on two recent ideas: (i) multiverse analysis, a philosophy of statistical reporting in which paper authors report the outcomes of many different statistical analyses in order to show how fragile or robust their findings are; and (ii) explorable explanations, narratives that can be read as normal explanations but in which the reader can also become active by dynamically changing some elements of the explanation. Based on five examples and a design space analysis, we show how combining these two ideas can complement existing reporting approaches and constitute a step towards more transparent research papers.
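The multiverse idea, running the same analysis under every combination of defensible processing choices, can be sketched concretely. The example below is our own toy illustration (function names, the outlier rule, and the data are assumptions, not from the paper): each key of the result is one "universe", and comparing cells shows which conclusions are fragile.

```python
from itertools import product
from statistics import mean, median

def multiverse(data, outlier_cutoffs, estimators):
    """Run one simple analysis (estimating central tendency) under
    every combination of processing choices: an outlier-exclusion
    cutoff crossed with a choice of estimator.
    """
    results = {}
    for cutoff, (name, est) in product(outlier_cutoffs, estimators.items()):
        kept = [x for x in data if abs(x) <= cutoff]  # outlier rule
        results[(cutoff, name)] = est(kept)
    return results

# A fragile finding: the mean depends heavily on the outlier cutoff,
# while the median barely moves across universes.
universes = multiverse([1, 2, 3, 100], [10, 1000],
                       {"mean": mean, "median": median})
```

In an explorable report, a reader would flip between these universes interactively instead of seeing only the one the authors happened to pick.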
Certain video games show promise as tools for training spatial skills, one of the strongest predictors of future success in STEM. However, little is known about the gaming preferences of those who would benefit most from such interventions: students with low spatial skills. To provide guidance on how to design training games for this population, we surveyed participants from three populations: college-age adults recruited online, students from a low-SES high school, and students from a high-SES high school.
Participants took a timed test of spatial skills and then answered questions about their demographics, gameplay habits, preferences, and motivations. The only predictors of spatial skill were gender and population: female participants from the online and low-SES high school populations had the lowest spatial skill. In light of these findings, we provide design recommendations for game-based spatial skill interventions targeting students with low spatial skills.

Trust facilitates cooperation and supports positive outcomes in social groups, including member satisfaction, information sharing, and task performance.
Here, we build on past work to present a comprehensive framework for predicting trust in groups. We also demonstrate how group trust predicts outcomes at both the individual and group levels, such as the formation of new friendship ties.

Visualization tools facilitate exploratory data analysis, but fall short at supporting hypothesis-based reasoning. We conducted an exploratory study to investigate how visualizations might support a concept-driven analysis style, where users can optionally share their hypotheses and conceptual models in natural language, and receive customized plots depicting the fit of their models to the data.
We report on how participants leveraged these unique affordances for visual analysis. We found that a majority of participants articulated meaningful models and predictions, utilizing them as entry points to sensemaking. We contribute an abstract typology representing the types of models participants held and externalized as data expectations.
Our findings suggest ways to rearchitect visual analytics tools to better support hypothesis- and model-based reasoning, in addition to their traditional role in exploratory analysis. We discuss the design implications and reflect on the potential benefits and challenges involved.

We have developed a pictorial multi-item scale, called P-SUS (Pictorial System Usability Scale), which aims to measure the perceived usability of mobile devices.
A user-centred design process was employed to develop and refine its 10 pictorial items. Psychometric properties (convergent validity, criterion-related validity, sensitivity, and reliability), as well as the motivation to fill in the scale, were assessed. The results indicated satisfactory convergent validity for about two-thirds of the items. Furthermore, strong correlations were obtained between the verbal and pictorial SUS sum scores, and the pictorial scale was perceived as more motivating than the verbal questionnaire.
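The sum-score correlation reported above is a standard Pearson correlation between paired scores. As a minimal sketch, with toy data that is not from the study:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two paired score lists, e.g. each
    participant's verbal SUS sum score paired with their pictorial
    P-SUS sum score.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Values near 1 indicate that the pictorial scale ranks participants the same way the verbal scale does, which is what a convergent-validity check looks for.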
The P-SUS represents a first attempt to provide a pictorial usability scale for the evaluation of mobile devices.

Public commentary related to reality TV can be overwhelmed by thoughtless reactions and negative sentiments, which often problematically reinforce the cultural stereotyping typically employed in such media.
Our findings highlight how Screenr supported interrogation of the production qualities and claims of shows, promoted critical discourse around the motivations of programmes, and engaged participants in reflecting on their own assumptions and views. We situate our results within the context of existing second-screening co-viewing work, discuss implications for such technologies to support critical engagement with socio-political media, and provide design implications for future digital technologies in this domain.
Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance.
During the interaction, vibrotactile motors deliver sensations to each finger that represent the haptic feel of squeezing, shearing, or turning an object. Our evaluation showed that, using TORC, participants could manipulate virtual objects more precisely.

HCI4D researchers and practitioners have leveraged voice forums to enable people with literacy, socioeconomic, and connectivity barriers to access, report, and share information.
Although voice forums have received impassioned usage from low-income, low-literate, rural, tribal, and disabled communities in diverse HCI4D contexts, the participation of women in these services is almost non-existent.