Japan
Discovery Summit
Exploring Data | Inspiring Innovation
Tokyo | November 18, 2016
Abstracts
Triskaidekaphilia
John Sall, Co-Founder and Executive Vice President, SAS
Triskaidekaphilia. This word means “love of the number 13.” With the release of JMP® 13, we plan to make this word meaningful. This session is a tour of some feature highlights of the new release.
The Luck Factor
Richard Wiseman, Professor of the Public Understanding of Psychology, University of Hertfordshire
For many years, psychologist Richard Wiseman has worked with some of the world’s luckiest and unluckiest people. His project, as described in The Luck Factor, scientifically explored why some people live charmed lives. Results demonstrate that lucky people think differently from unlucky people. They are open to new experiences. They are resilient. And they are relaxed enough to see opportunities in the first place.
Wiseman developed four behavioural techniques based on his research, which have enabled others to enhance their own good fortune. The efficacy of these techniques has been scientifically tested in a series of experiments referred to as Luck School, and almost all participants report significant life changes, including increased levels of luck, self-esteem, confidence and success.
As luck would have it, Wiseman agreed to join us at Discovery Summit to share his research on why some people lead happy, successful lives, while others face repeated failure and sadness.
He’ll outline the principles of good luck: maximising chance opportunities; listening to lucky hunches; expecting good fortune; and turning bad luck to good – so that you too can improve your odds in life.
-
The Efforts of Academic-Industrial Workshops Related to the JMP® Clinical RBM Tools
Tenpei Miyaji, Project Assistant Professor, Department of Clinical Trial Data Management, Graduate School of Medicine, The University of Tokyo
Takuhiro Yamaguchi, Project Researcher / Professor, Department of Clinical Trial Data Management, Graduate School of Medicine, The University of Tokyo / Division of Biostatistics, Tohoku University Graduate School of Medicine (Co-Author)
- Industry/Topic: Health Care/Data Visualization/Data Access and Manipulation
- Level: 2
Clinical study methodologies such as risk-based monitoring (RBM) and adaptive design are being adopted throughout medical research and pharmaceutical development. RBM is a method that uses key risk indicators (KRIs) and their thresholds, together with central monitoring, to detect signals from high-risk sites and data. Although the methodology itself has been developed, concrete operational models and results are still insufficient. We used the RBM tools in JMP Clinical and conducted academic-industrial workshops presided over by the Department of Clinical Trial Data Management, Graduate School of Medicine, The University of Tokyo, in order to investigate how these tools can be used in clinical studies. JMP Principal Research Statistician Developer Richard C. Zink wrote an educational book entitled “Risk-Based Monitoring and Fraud Detection in Clinical Trials Using JMP and SAS,” and we held monthly group reading workshops from April 2015 to April 2016. In this presentation, I will introduce the efforts of the workshops in which JMP Clinical and related books were used, the results that were achieved, and the results of a survey given to participants.
-
JMP® Applications for Utilizing Big Data on Manufacturing
Nobuo Hara, Staff Engineer, Panasonic Corporation
- Industry/Topic: Manufacturing/JSL Application Development/Data Access and Manipulation
- Level: 3
At Panasonic, we have utilized many resources to make use of data in the semiconductor field for many years, and have been successful at making use of big data in manufacturing. However, manufacturing equipment and product unit values are not nearly as expensive in many other business divisions as they are in the semiconductor field, so the resources that can make use of data have been limited from a cost-effectiveness perspective. In recent years, however, examples of the successful use of big data centered on IT have become widely known, and the utilization of big data in general manufacturing has become desirable. Under such circumstances, my department, which is the companywide production technology support unit, has established three issues for the utilization of big data in manufacturing with limited resources: data acquisition, data extraction, and data analysis. Data analysis involves the development and demonstration of systems that use JSL and OLE in JMP, and this is currently underway. In this presentation, I will explain the issues involved in the use of big data in manufacturing and their resolution, based on the system we have developed.
-
A Case Example of the Use of JMP® Statistical Analysis in the Corporate Human Resources Field: A Presentation on Personnel Management and Effectiveness Verification Through the Prediction of Potential Retirement and the Prediction of Future Performance of Employment Applicants
Isamu Ueno, Senior Managing Director, Septeni Holdings (Co-Author)
Tatsuya Shindo, Researcher, Septeni Holdings
- Industry/Topic: Services/Predictive Modeling/Data Visualization
- Level: 1
Information on the "humans" in human resources is managed by companies daily, and a great deal of it accumulates. However, many corporate human resources departments do not practice data-driven personnel management, a means of making people the pillar of corporate value. There are many reasons for this: much human resource information is not quantified; there is a lot of missing data; much of the data exists only as flags; and it is difficult to tally and analyze such data in conventional applications such as Excel. JMP eliminates these concerns, drastically improving analysis efficiency in corporate human resources, where many complicated, case-by-case analyses are required. One example is the prediction of potential retirement using partition analysis. We predicted the probability of retirement within one year for 400 subjects, ranked them, and conducted a management review. One year later, 50 percent of the individuals ranked in the top 10 and 25 percent of those ranked in the top 50 had retired. Further, among the top 50, the two employees with the best performance stayed with the company thanks to retention measures (informed by certain explanatory variables). In our recruitment activities, we created a contour map of employees using two personality types and three performance axes, and plotted applicants on the map to picture their future. Using discriminant analysis, we also strove to determine whether applicants would accept a position after being offered a job and whether their future performance would be high, medium or low. We compile these predicted results into assessment sheets and use them in executive interviews. The correlation between executive decisions and the compiled data is extremely high, so we are considering eliminating interviews in the future. Using JMP in these ways has allowed us to take data on people that lies buried in corporations and channel it into sources that cultivate corporate value. The use of statistical analysis tools, including JMP, is spreading to the human resources departments of other corporations, and we hope this will lead to improvements in productivity across Japan.
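As a rough illustration of the partition (classification tree) approach described above – with invented columns and data, not Septeni's actual model – the ranking step might look like this in Python:

# Hypothetical sketch: a classification tree of the kind JMP's Partition platform
# fits, used to rank employees by predicted probability of retirement within one
# year. Column names and data below are invented.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

hr = pd.DataFrame({
    "overtime_hours":   [10, 45, 30, 5, 60, 20, 50, 15],
    "tenure_years":     [1, 3, 7, 10, 2, 5, 1, 8],
    "engagement_score": [4, 2, 3, 5, 1, 4, 2, 5],
    "retired_within_1y":[0, 1, 0, 0, 1, 0, 1, 0],
})

X = hr[["overtime_hours", "tenure_years", "engagement_score"]]
y = hr["retired_within_1y"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Rank employees by predicted retirement probability (highest risk first)
hr["p_retire"] = tree.predict_proba(X)[:, 1]
print(hr.sort_values("p_retire", ascending=False).head(10))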
-
Experiences of Centralized Monitoring Using JMP® Clinical: Promoting Risk-Based Monitoring
Yuichi Fukumasu, Centralized Monitoring Department, Development Strategy Division, A2 Healthcare Corp.
- Industry/Topic: Services Other/Data Visualization/Quality and Reliability
- Level: 1
At A2 Healthcare, we promote risk-based monitoring (RBM) to effectively and efficiently ensure the quality of clinical testing. This presentation will be a report of our experiences with central monitoring using JMP Clinical, which is carried out by the Central Monitoring Department of A2 Healthcare. After central monitoring is conducted, an RBM review meeting is held to confirm the key risk indicators (KRIs). Since adopting JMP Clinical, it has become possible to efficiently create materials used at these meetings. The advantages of using JMP Clinical are as follows:
- Central statistical monitoring (CSM) of SDTM data, which is provided more or less as a standard function, and fraud detection can be executed.
- GUI operation is possible, so the members carrying out central monitoring can do so without writing any programming code.
- By drilling down from the charts used to confirm the KRIs, cases of interest can be followed up.
Central monitoring necessitates that data be viewed from a higher perspective, but by using JMP Clinical, individuals can easily create the charts they need. I will introduce case examples of central monitoring utilizing JMP Clinical, conducted by A2 Healthcare.
-
The Development of Highly Accurate Toxicity Prediction Methods Using the JMP® Machine Learning Function
Yoshihiro Uesawa, Associate Professor of Clinical Pharmaceutics, Meiji Pharmaceutical University
- Industry/Topic: Education/Predictive Modeling/Data Exploration
- Level: 2
Along with the increase in environmental consciousness in recent years, exhaustive investigation of the hazardous chemical substances around us is gaining momentum around the globe. However, there is a vast number of different chemical substances, so experimental investigation is not practical from a time, budget or ethical (animal testing) perspective. The academic field of computational toxicology is garnering attention for its ability to predict the toxicity of chemical substances statistically and computationally, based on information that has already been accumulated. In 2014, the National Institutes of Health in the United States held the Tox21 Data Challenge 2014, an ambitious computational toxicology competition using toxicity-related data on approximately 10,000 compounds. At this competition, which featured over 100 participating teams from 18 countries, I used bootstrap forest, one of the machine learning functions in JMP Pro, to analyze a variety of information patterns derived from chemical structures, which resulted in a victory in the category for the prediction of female hormone-like substances. In this presentation, I will discuss the selection of the optimal bootstrap forest model based on the flexible functions in JMP that were used during the competition.
Related site: http://www.my-pharm.ac.jp/news/info_detail.html?id=599
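As a rough sketch of the general approach named above – a bootstrap forest is an ensemble of classification trees grown on bootstrap samples, closely related to a random forest – predicting a binary toxicity endpoint from chemical-structure descriptors could be illustrated in Python as follows; the descriptors and labels are placeholders, not the Tox21 data or the competition model.

# Generic sketch of a bootstrap-forest-style toxicity model (random forest of
# classification trees on bootstrap samples). Descriptors and labels are
# placeholders, not the Tox21 data or the presenter's competition model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))   # placeholder chemical-structure descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # placeholder activity label

forest = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
# Cross-validated AUC is one way to compare candidate forest settings when
# selecting a model, as the abstract describes doing within JMP.
print(cross_val_score(forest, X, y, cv=5, scoring="roc_auc").mean())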
-
Strategic Optimal Design of Machine Processing With Reverse Engineering
Akira Ogawa, Doctoral Program, Business Administration, Mejiro University Graduate School
Takenori Takahashi, Professor, Business Administration, Mejiro University Graduate School (Co-Author)
- Industry/Topic: Manufacturing/DOE/Predictive Modeling
- Level: 3
When developing a system or product, the difference between the completed system or product and the design specifications should be minimized. This is the typical engineering-process attitude. In recent years, on the other hand, attempts have been made to derive design concepts from the knowledge gained by measuring existing targets. When measurement results deviate from design specifications, this can be a source of useful information rather than simply a problem. This is one of the characteristics of reverse engineering: the attitude that measured data can be a starting point. For example, when perforating glass in machine processing, the processing is often evaluated based on the roundness of the perforations; however, it could also be evaluated using a separate index based on the analyzed data, e.g., by defining the polar coordinates of an elliptical shape. When establishing an information processing system, a reverse engineering method is used in which important information is retrieved through analysis of the current system and then reflected in the design of the next system. Conventionally, individuals could not perform this type of data analysis; however, thanks to improvements in measuring devices and computers and advances in analysis software, researchers and analysts can now analyze freely and aim for optimal design. In this lecture, I will review the possibilities for strategic optimal design using JMP, in light of the current state of reverse engineering and statistical data analysis in machine processing. I would also like to discuss the Internet of Things and future issues with participants.
-
A Case Example of Analysis and Design in the Realm of Organizational Human Resources: A Proposal for In-House Measures Based on Multiple Employee Surveys
Sho Kawasaki, Doctoral Program, Business Administration, Mejiro University Graduate School
Takenori Takahashi, Professor, Business Administration, Mejiro University Graduate School (Co-Author)
- Industry/Topic: Cross Industry/Data Exploration/Data Visualization
- Level: 1
In this presentation, I will discuss two types of employee surveys with different purposes – conducted within a company and analyzed simultaneously – and the methodology for deriving specific in-house measures based on the results. Normally, the results of these two surveys are separately processed. However, with the screening principal component regression analysis function in JMP, it is possible to comprehensively analyze and design (propose) multiple surveys. For this case example, we combined two questionnaire surveys conducted by Company A (career measures survey and career values survey), and conducted analysis after splitting the data into a factor group and a results group. Based on the causal relationship results that were obtained, we investigated the potential of items with a strong relationship to the principal component chosen through variable selection. If there was room for growth, we proposed improvement measures, and if not, we proposed maintaining the current situation. A corporation’s organizational human resources data – both quantitative and qualitative – is scattered throughout many areas: employment, training, evaluation, labor, organizational development, etc. A comprehensive perspective, including individual employees, departmental organization, and the entire company, is a reliable way to obtain qualitative information. A statistical approach to quantitative and qualitative information is important even in the realm of organizational human resources and, by using JMP, new knowledge can be discovered through both exploratory and visual means.
-
The Analysis of Variation Factors in Impregnated Retardants to Improve Quality in Retardant Lumber Production
Seiichi Yasui, Assistant Professor, Department of Management Engineering, Faculty of Science, Tokyo University of Science
Yoshifumi Ohmiya, Professor, Department of Architecture, Faculty of Science, Tokyo University of Science (Co-Author)
- Industry/Topic: Manufacturing/Quality and Reliability/Data Visualization
- Level: 1
The use of lumber in various buildings has been increasingly expected in recent years, from both regional-economy and environmental-protection perspectives. However, lumber is combustible, and it is necessary to adequately ensure that the materials are fireproof. To that end, pressure injection is performed to impregnate retardants into lumber under high pressure; lumber treated in this way is referred to as retardant lumber. Under ideal circumstances, the target amount of retardant is impregnated; however, impregnation variation is widely reported in actual production settings. If a high target value is established in pressure injection processing, the rate of non-conforming products decreases; however, the percentage of excess impregnation also increases, which can degrade aspects of quality other than fire resistance. Thus, decreasing variation in the amount of retardant impregnated through pressure injection is an important issue in the production of retardant lumber. We took actual data on pressure injection processing (batch treatment) over the past two years from a certain producer and used JMP to perform exploratory data analysis and to analyze the factors behind impregnation variation. We identified location effects and dispersion effects in the batches, and established a statistical model to stipulate processing conditions. With this data, we are also validating the effects of the measures that the producer took in the past. Our report shows how JMP's data handling operations and analysis tools work together, and provides a data analysis case example for improving quality in retardant lumber production.
-
The Application of Computer-Aided Engineering and Statistical Methods in the Automotive Industry
(An Investigation of NVH Robust Design and Multi-Objective Optimization)
Ichiro Shibata, Technical Marketing Manager, Marketing Division, Altair Engineering
- Industry/Topic: Service/DOE/Predictive Modeling
- Level: 3
Computer-aided engineering (CAE) has become an indispensable tool for the pursuit of automotive NVH (noise, vibration and harshness), collision safety and so on. Generally speaking, however, CAE is considered a deductive, deterministic tool that uniquely quantifies input-output relationships in models and cannot be used for stochastic evaluation that accounts for the various uncertainties of actual automobiles. Yet in order to promote prototype-less development (the proactive use of virtual prototypes through CAE), which is gaining traction in the automotive industry, stochastic evaluation is important. Through joint research with automotive manufacturers, I apply NVH robust design methods, conducting DOE and multiple regression analysis on numerical simulation models and building probabilistic projection models with JMP. I will introduce a case example of this work, as well as a case example in which DOE, cluster analysis and neural networks in JMP are used for pattern recognition and machine learning in the deformation mode control of nonlinear, highly statically indeterminate automotive structures.
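One reading of the "probabilistic projection model" workflow above: fit a regression metamodel to CAE runs laid out by a DOE, then propagate assumed parameter variation through the metamodel by Monte Carlo. The sketch below illustrates that idea with invented numbers; it is not Altair's or any manufacturer's actual model.

# Hypothetical illustration: a regression metamodel fit to DOE-based CAE runs,
# then Monte Carlo propagation of parameter variation (a stochastic evaluation
# of an NVH-type response). All numbers are invented.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Invented DOE: panel thickness (mm), damping coefficient, rib height (mm)
X_doe = np.array([[0.8, 0.02, 4], [0.8, 0.06, 8], [1.2, 0.02, 8],
                  [1.2, 0.06, 4], [1.0, 0.04, 6], [0.8, 0.04, 6],
                  [1.2, 0.04, 6], [1.0, 0.02, 4], [1.0, 0.06, 8]])
y_cae = np.array([78.2, 74.5, 72.9, 73.8, 74.6, 76.1, 72.5, 76.8, 72.0])  # invented noise level (dB)

meta = make_pipeline(PolynomialFeatures(degree=2, interaction_only=True),
                     LinearRegression()).fit(X_doe, y_cae)

# Propagate assumed manufacturing variation of the inputs through the metamodel
rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, 0.04, 6.0], scale=[0.05, 0.005, 0.3], size=(10000, 3))
pred = meta.predict(samples)
print(f"mean = {pred.mean():.1f} dB, std = {pred.std():.2f} dB")  # stochastic evaluation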
-
An Example of Data Acquisition and In-House Efforts to Develop Active Pharmaceutical Ingredients Based on Statistical Methods
Tatsushi Murase, Researcher, Chemical Process R&D, CMC and Production HQ, Ono Pharmaceutical Co.
- Industry/Topic: Pharmaceuticals
- Level: 2
In order to stably supply high-quality pharmaceuticals, my company's active pharmaceutical ingredients department works to establish robust manufacturing processes that can respond to fluctuations in process parameters. Development methods based on Quality by Design (QbD) have been used increasingly in recent years, both in Japan and overseas. In this presentation, I will discuss a case example of using statistical methods in the research and development of active pharmaceutical ingredient manufacturing processes aimed at realizing QbD. In the active pharmaceutical ingredients department, we use statistical methods within a coordinated system that allows for the efficient development of high-quality active pharmaceutical ingredients. We select the appropriate statistical method for each manufacturing process, efficiently acquire data, statistically optimize each manufacturing parameter, and promote the development of robust manufacturing processes. That is, for parameters that have a large effect on the quality of active pharmaceutical ingredients, we use classical experimental design and aim to create highly accurate predictive models. For important manufacturing parameters that have a smaller impact on quality, we use the newer JMP features of definitive screening designs, custom designs and multivariate analysis in an effort to acquire data quickly and expand the use of statistical methods. I will introduce an example of this in my presentation.
-
Image Research of Corporate Logo Design Using Multivariate Analysis
Yoko Suzuki, Research Student, Faculty of System Design, Tokyo Metropolitan University
- Industry/Topic: Education/Data Exploration/Data Visualization
- Level: 1
Choosing a design is a difficult process, isn't it? Designs play an important part in our lives, from our personal daily lives – vehicles, clothing, interior decorating – to corporate strategies and product development. Generally speaking, image research on corporate designs has not been made public. I would like to conduct research on design at university and make the results public, using statistical methods to obtain more objective results. In this presentation, I take corporate logos as an example. Using factor analysis, I extracted the image factors of a variety of both old and new corporate logos, and grouped the logos using cluster analysis. I also analyze them together with the features of the logos from a graphic design perspective: concrete, abstract and logotype. I would love for those who feel they don't really understand design – whether individuals or corporations – to attend.
Related site: http://doi.org/10.5057/ijae.13.133
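As a generic illustration of the two analysis steps named above (shown in Python rather than JMP), factor analysis of logo-impression ratings followed by clustering of the factor scores might look like this; the rating data are invented.

# Generic sketch: factor analysis of invented logo-impression ratings, followed
# by k-means clustering of the factor scores (the two steps named in the abstract).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Invented data: 40 logos rated on 8 impression scales (e.g., modern, warm, ...)
ratings = rng.normal(size=(40, 8))

scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(ratings)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(labels)  # cluster membership of each logo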
-
Are You Fully Utilizing the JMP® Graph Builder? Various Graph Builder Tips Learned From Interesting Examples of Graphs
Taku Ogasawara, Systems Engineer, Technical Group, JMP Japan Division, SAS Institute Japan
- Industry/Topic: Services/Data Visualization/Data Exploration
- Level: 2
The JMP Graph Builder is an extremely useful tool for creating interactive graphs. Although graphs can be created just by dragging and dropping variables, are you satisfied by creating graphs with variables simply placed into X drop zones and Y drop zones? With just a little effort, it is possible to make your presentations even more appealing with exploratory data analysis, data visualization, and graphs. In this presentation, I will show examples of attractive graphs from the JMP User Community, and provide various tips on the use of Graph Builder through demonstrations. The main target audience for this presentation is those with beginner and intermediate experience creating graphs in JMP and those who are aware of their difficulties doing so, but there may also be points of interest for more advanced users as well.
-
Various Types of Statistical Analysis Using Maximum Likelihood With Left-Censored Quantification Limits
Yukio Takahashi, President, BioStat Institute Co. Ltd.
- Industry/Topic: Life Sciences/Quality and Reliability
- Level: 4
A lot of measurement data contains values below or above quantification limits that cannot be accurately measured. How should the average value or the 95 percent confidence interval be calculated for such data? For convenience, is it acceptable to substitute half the lower limit of quantitation, or to treat such values as zero, when calculating the average? And how should values above the upper limit of quantitation be handled? In such cases, the median and interquartile range are often used in place of the average value and standard deviation (SD), but there are limits to the analyses that can be performed this way with various statistical models. When regression analysis must be performed on data that include measurement limits, how should it be done? Maximum likelihood analysis that accommodates censored data – for univariate distributions and bivariate relationships – is now possible using the Life Distribution and Fit Life by X platforms in JMP. These platforms handle not only right censoring but also left censoring and interval censoring simultaneously. They are multifunctional and require careful settings, but the documentation does not include examples of analyzing data with quantification limits. I will present analysis methods and results using various data examples.
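The key idea behind the maximum likelihood approach is that observed values contribute the density to the likelihood, while values below the lower limit of quantitation (LLOQ) contribute the cumulative probability below that limit. A minimal sketch, assuming lognormal data and using SciPy in place of JMP's Life Distribution platform, with invented data:

# Minimal sketch of maximum likelihood with left-censoring at an LLOQ:
# observed values contribute log f(x); values "< LLOQ" contribute log F(LLOQ).
# Invented lognormal data; SciPy used here in place of JMP's Life Distribution.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
lloq = 0.5
raw = rng.lognormal(mean=0.0, sigma=0.8, size=200)
observed = raw[raw >= lloq]
n_censored = np.sum(raw < lloq)          # only the count "< LLOQ" is known

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)            # keep sigma positive
    # lognormal log-density: normal log-density of log(x) minus log(x)
    ll = stats.norm.logpdf(np.log(observed), mu, sigma).sum() - np.log(observed).sum()
    # each left-censored value contributes log P(X < LLOQ)
    ll += n_censored * stats.norm.logcdf(np.log(lloq), mu, sigma)
    return -ll

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)                 # MLEs of the log-scale mean and SD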
-
The Individualization of Circulating Cancer Cells in the Blood of Cancer Patients Using JMP®
Hiroaki Ito, Associate Professor, Department of Surgery, Digestive Diseases Center, Showa University Koto Toyosu Hospital
- Industry/Topic: Life Sciences/Data Visualization/Data Exploration
- Level: 3
As a gastroenterological surgeon, I am mostly involved in the surgical treatment of esophageal cancer and stomach cancer, but I also continue translational research on micrometastasis and early cancer diagnosis. Circulating tumor cells, which detach from the primary tumor and circulate in the blood of cancer patients, are believed to be the main cause of hematogenous cancer metastasis. In 2004, I clarified the relationship between circulating cancer cells, cancer progression and prognosis in patients with esophageal cancer. I subsequently identified similar results for stomach cancer, and have continued my research with the aim of developing methods for early cancer diagnosis and new metastasis-suppression treatments. I recently reported that treatment changes the cellular diameter of circulating cancer cells, and I developed a technique for detecting and sampling viable circulating cancer cells without labelling. JMP is an indispensable and powerful tool for characterizing individual circulating tumor cells, which are extremely important samples in cancer research, and for analyzing the relationship between circulating tumor cells and patients' prognoses. In this presentation, I will demonstrate how JMP can be used to analyze complex clinical data.
Related site: http://www10.showa-u.ac.jp/~ddc-kt/Page_03_03_research.html
-
Shaping Up Big Data? A Data Workout With JMP®
Michele Boulanger, Professor, International Business Department, Rollins College, Orlando
Mia Stephens, JMP Academic Ambassador, SAS (Co-Author)
- Industry/Topic: Cross Industry/Data Exploration/Predictive Modeling
- Level: 2
Our ability to capture transactional data in most fields has led us to the era of “big data”. What is big data? Does “big” necessarily mean dirty, messy, inconsistent, or unwieldy? How much toning and conditioning do we need to do in the big data world? What differentiates preparation in the big data world from the traditional data cleaning phase? In this talk we discuss the different challenges encountered in potentially the most time-consuming phase of big data analytics: data preparation. We present two case studies with very different goals, requiring different approaches to shaping up the data for modeling. Along with these approaches, we also highlight techniques and platforms from JMP such as query, recode, standardization, transformation, imputation, text mining, and others to develop a traceable and reproducible methodology to prepare big data for the modeling phase. All demonstrations will be done live with JMP Pro 13.
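As a language-neutral sketch of the preparation steps listed above (query, recode, imputation, standardization) – done in the talk with JMP and JMP Pro platforms, and shown here in pandas with a hypothetical file and columns:

# Hypothetical pandas sketch of the preparation steps named in the abstract
# (query/filter, recode, imputation, standardization); in the talk these are
# done with JMP/JMP Pro platforms. File and column names are invented.
import pandas as pd

df = pd.read_csv("transactions.csv")                        # hypothetical source file

df = df[(df["amount"] > 0) & df["region"].notna()]          # query/filter: keep valid rows
df["channel"] = df["channel"].replace(                      # recode inconsistent labels
    {"WEB": "web", "Web": "web", "phone ": "phone"})
df["age"] = df["age"].fillna(df["age"].median())            # simple imputation of a gap
df["amount_z"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()  # standardization

df.to_csv("transactions_clean.csv", index=False)            # traceable, reproducible output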
-
The Visualization and Analysis of Live Birth Data Using JMP®: Advancing Analysis From Date of Birth Ranking
Naohiro Masukawa, Systems Engineer, Technical Group, JMP Japan Division, SAS Institute Japan
- Industry/Topic: Cross Industry/Data Visualization/Data Exploration
- Level: 4
The question "What is the probability of two people in the same class sharing the same birthday?" is a famous probability question, but this probability is normally calculated as equally probable for all of the days in a year. However, there are more live births on Christmas, December 25, than on other days, and fewer live births during the first three days of the new year, January 1 to 3. In other words, there are biases in the number of live births on certain days and their surrounding days. For that reason, there is a "date of birth ranking" in which the days are ranked in order of the number of live births. In this presentation, I will use survey data regarding live births after 1996 from the Ministry of Health, Labour and Welfare's Population Survey Report. By understanding the days and months in which there are live birth biases – visualizing the number of live births per day and calculating and comparing these live birth distribution biases per year – the causes of the biases can be considered. Further, by considering the transitions in live births by year and biases in the number of live births by month, a statistical model that explains the number of live births by year, month, and day can be established, and annual and monthly impacts can be considered. In particular, I will explain trial-and-error results for the visualization of live births through a JMP demonstration.
-
Using JMP® Pro 12 to Investigate Designs Arising From the DOE Custom Design Platform
Mark Johnson, Professor of Statistics, University of Central Florida
Seiichi Yasui, Assistant Professor, Department of Industrial Administration, Faculty of Science and Technology, Tokyo University of Science (Co-Author)
- Industry/Topic: Manufacturing/DOE/JSL Application Development
- Level: 2
The Custom Design platform within DOE in JMP Pro 12 can yield 27 distinct possibilities for two-level, 16-run fractional factorial designs. In this presentation we investigate properties of various designs using JMP itself and provide some JSL scripts to facilitate the analysis. One such property concerns the span of the vectors consisting of all main effects and two-factor interactions for a given design (namely the vectors I, A, B, …, F, AB, AC, …, EF). Somewhat surprisingly, the “gold-standard” resolution IV design (using generators E = ABC and F = BCD with a full factorial in A, B, C and D) spans only 14 of the 16 possible dimensions. This deficiency implies that the gold-standard design will be incapable of modeling some response vectors (specifically, those in the span of the vectors ABD and ACD for this case). In contrast, the “inferior” resolution III design based on generators E = AB and F = CD is capable of spanning the full 16 dimensions of possible response vectors, and is thus much preferred to the gold standard in certain circumstances. Some non-regular designs among the 27 Custom Design possibilities also enjoy full-dimensional spanning capability while being on par with other design properties (E(s²) and trace(AA′)), as noted by Lu, Johnson and Anderson-Cook (Quality Engineering, 2014). For example, the design with generators E = ABC and F = 1/2[CD + ACD + BCD − ABCD] spans 16 dimensions. In examining the non-regular designs emanating from Custom Design, it is important to identify all the full factorial projections from the main effect columns; a JSL script is presented to facilitate this effort. The spanning deficiency will be shown to occur as well for a 32-run, resolution V design.
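The spanning property is straightforward to check numerically: build each 16-run design from its generators, form the model matrix of the intercept, main effects and two-factor interactions, and compute its rank. The sketch below (in Python/NumPy; the talk's own scripts are in JSL) reproduces the 14-versus-16 comparison described above.

# Check the span of {I, main effects, two-factor interactions} for two 16-run,
# six-factor designs built from a full factorial in A, B, C, D plus generators.
import itertools
import numpy as np

def model_matrix(E_def, F_def):
    """Build the 16-run design from the given generators and return the
    16 x 22 model matrix [I, 6 main effects, 15 two-factor interactions]."""
    runs = np.array(list(itertools.product([-1, 1], repeat=4)))  # A, B, C, D
    A, B, C, D = runs.T
    factors = {"A": A, "B": B, "C": C, "D": D,
               "E": E_def(A, B, C, D), "F": F_def(A, B, C, D)}
    cols = [np.ones(16)] + list(factors.values())
    cols += [factors[i] * factors[j] for i, j in itertools.combinations(factors, 2)]
    return np.column_stack(cols)

# Resolution IV "gold standard": E = ABC, F = BCD
X_iv = model_matrix(lambda A, B, C, D: A * B * C, lambda A, B, C, D: B * C * D)
# Resolution III design: E = AB, F = CD
X_iii = model_matrix(lambda A, B, C, D: A * B, lambda A, B, C, D: C * D)

print("rank of resolution IV model matrix:", np.linalg.matrix_rank(X_iv))    # 14
print("rank of resolution III model matrix:", np.linalg.matrix_rank(X_iii))  # 16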
-
Conjoint Analysis Using JMP® and Case Examples of Its Application
Masahiro Arima, Professor, Graduate School of Applied Informatics, University of Hyogo
- Industry/Topic: Education/Data Exploration/JSL Application Development
- Level: 2
Preference structures for consumer products and services, and for local community measures, can be clarified from stated-preference data gathered in questionnaires, web surveys and so on; conjoint analysis (the choice experiment) is one method for doing this. By extracting the important attributes that constitute products, services and measures, and establishing several levels for each attribute, these attribute levels can be combined into imaginary product, service and measure profiles whose preference structures can be elicited through methods such as ranking, five-grade evaluation, dichotomous choice and so on. A model such as a rank logit model, multinomial logit model or binomial logit model can then be applied to estimate partial utility values (the coefficients of each level of each attribute), and from them the importance of the attributes and the desirability of the levels. In this presentation, I will show the usefulness of conjoint analysis in marketing activities and policymaking processes, with the aim of expanding the base of JMP users and the range of uses. I will introduce case examples of the application of conjoint analysis in a nationwide web survey and in an inhabitant consciousness survey carried out by a reporter in Tatsuno City and Miki City, Hyogo Prefecture. I will also introduce tips and hints for using JMP effectively in conducting conjoint analysis: methods for generating profiles with screening designs and custom designs, steps for estimating partial utility values with logit models, and some JSL scripts that ease the process of actually conducting conjoint analysis, so that attendees will gain practical know-how.
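To make the estimation step concrete: for dichotomous-choice responses, the partial utility values are simply the coefficients of a binomial logit fit to the coded attribute levels. The sketch below uses Python/statsmodels with simulated data purely as an illustration; the presentation itself uses JMP and JSL.

# Illustrative only (the talk uses JMP/JSL): partial utility values for a
# dichotomous-choice conjoint design estimated as binomial logit coefficients.
# Attributes, levels and responses below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Profiles from a 3 (price) x 2 (design) factorial
profiles = pd.DataFrame([(p, d) for p in ["low", "mid", "high"]
                                for d in ["plain", "fancy"]],
                        columns=["price", "design"])

# 200 simulated respondents judge each profile acceptable (1) or not (0)
data = profiles.loc[profiles.index.repeat(200)].reset_index(drop=True)
utility = (data["price"].map({"low": 1.0, "mid": 0.3, "high": -1.0})
           + data["design"].map({"plain": 0.0, "fancy": 0.6}))
data["choice"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-utility)))

# Binomial logit: coefficients estimate the partial utility of each level
fit = smf.logit("choice ~ C(price) + C(design)", data=data).fit(disp=0)
print(fit.params)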
-
Learn Why Regulatory Agencies, Pharmaceutical Companies and CROs Worldwide Are Choosing JMP® Clinical
Creating and Sharing Advanced User Interfaces With Large Teams Using JMP® Clinical for Clinical Trial Review
Geoffrey Mann, Ph.D., JMP Product Manager, JMP Health and Life Sciences, SAS Institute Inc.
Drew Foglia, Principal Software Developer (Genomics), JMP Health and Life Sciences, SAS Institute Inc.
- Industry/Topic: Life Sciences /Data Visualization / Quality & Reliability
- Level: 2
Learn why regulatory agencies, pharmaceutical companies and CROs worldwide are choosing JMP Clinical.
We will demonstrate how clinical operations and biostatistics groups can evaluate global clinical trials for patient, site, investigator and monitor anomalies, using a variety of reports from JMP Clinical – including Data Integrity, Risk-Based Monitoring and even patient profiles – to assess the ongoing health of those trials. We will also show how medical writers, medical monitors and medical reviewers can create safety and efficacy content for clinical study reports and clinical review reports in a fraction of the time it used to take them.
Creating and sharing advanced user interfaces with large teams using JMP Clinical for clinical trial review.
Learn how to create sophisticated dashboards for clinical, operational and biostats teams in JMP Clinical within minutes and share these reports and notes with users in multi-disciplinary teams involved in the review process of clinical trials data. Deploy these reports to users on desktops, virtualization technologies, SAS Drug Development or Life Sciences Analytics Framework as well as newer technologies such as Google Drive.
- Level key: 1 = Beginner; 2 = Intermediate; 3 = Advanced; 4 = Power user
Innovative Thermal Transport Modeling of Fusion Plasma Using JMP®
Masayuki Yokoyama, Professor, National Institute for Fusion Science
- Industry/Topic: Education/Data Exploration/JSL Application Development
- Level: 2
I have attempted to model the thermal transport properties of fusion plasmas using a "big data" approach based on the results of many experiments and their analyses. This research, which uses JMP to treat the tens of thousands of plasma experiments conducted to date as a source of big data, differs in its fundamental idea from conventional modeling of thermal transport properties based on physics mechanisms. If it is successful, we will be able to model thermal transport properties across a broad range of plasma parameters without needing to specify the types of fluctuations that cause turbulent transport, collisional thermal transport and so on. Moreover, we will be able to predict parameters such as temperatures in fusion reactors relatively easily and quickly, for effective reactor operation and control.
Separation Prediction Based on Statistical Retention Modeling
for Simultaneous Optimization of Aqueous Phase pH and Organic Phase Composition
Tsukasa Sasaki, Researcher, Analytical & Quality Evaluation Research Laboratories, Pharmaceutical Technology Division, Daiichi Sankyo
- Industry/Topic: Education/Data Exploration/JSL Application Development
- Level: 2
Reversed-phase liquid chromatography (RP-HPLC) has been recognized as an extremely important analytical methodology in recent years in the field of quality assessment of low-molecular-weight drugs, since it can separate components from complicated mixtures and quantitate them with good reproducibility. Generally, optimization and refinement of analytical conditions in RP-HPLC are performed stepwise according to the physicochemical properties of the target compounds. However, since the final separation conditions depend on the effects of several factors related to the mobile phase's properties, a trial-and-error approach is not a practical way to find the optimal setpoint. In this presentation, we will report research on a simultaneous optimization approach that applies multiple linear regression analysis and artificial neural network analysis to the composition of the organic portion and the pH of the aqueous portion of the mobile phase.
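As a hedged illustration of the modeling idea – not the authors' method or data – a quadratic regression of retention time on aqueous-phase pH and organic fraction can be fit to a handful of hypothetical screening runs and then evaluated over a grid of candidate conditions:

# Hypothetical sketch: a quadratic statistical retention model in aqueous-phase pH
# and organic-phase fraction, fit to invented screening runs and evaluated on a
# grid to locate a candidate setpoint. Not the authors' method or data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Invented screening experiments: (pH, %organic) -> retention time (min)
X = np.array([[2.0, 20], [2.0, 40], [4.0, 20], [4.0, 40], [6.0, 20], [6.0, 40], [4.0, 30]])
y = np.array([12.1, 6.3, 10.5, 5.2, 8.9, 4.1, 7.4])

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Predict retention over a fine grid of conditions to locate a candidate setpoint
ph_grid, org_grid = np.meshgrid(np.linspace(2, 6, 21), np.linspace(20, 40, 21))
grid = np.column_stack([ph_grid.ravel(), org_grid.ravel()])
pred = model.predict(grid)
best = grid[np.argmin(np.abs(pred - 8.0))]  # e.g., target a retention time near 8 min
print("predicted setpoint (pH, %organic):", best)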
An Encouragement of Robust Parameter Design (From KKD to Science)
Tadashi Mitsui, Chief Specialist, Storage & Electronic Device Solutions Company Center for Semiconductor Research & Development, Semiconductor Research Planning & Coordination Department, Toshiba Corporation
- Industry/Topic: Education/Data Exploration/JSL Application Development
- Level: 2
In mass-production manufacturing processes, robust design is regarded as important, especially with respect to process parameter variation. Nevertheless, it is rarely carried out because it requires large experimental resources: conventional methods combine an orthogonal (inner) array with a noise matrix as an outer array, so the total number of experiments increases greatly, except in cases such as uniformity optimization, where the noise factor is simply the measurement sampling position. In this study, we propose a cost-effective method for robust design using custom design.
A Consideration of Differences in Average Public and Private Saving Rates by Generation and Current Income
Kenichiro Tanaka, Doctoral Student, Graduate School of Applied Informatics, University of Hyogo
- Industry/Topic: General/Data Exploration/Data Visualization
- Level: 1
In order to examine what differences might exist in the saving rates of individuals working in the public sector (public) and individuals working in the private sector (private), we used pseudo-microdata for educational purposes provided by the National Statistics Center and analyzed it with the JMP statistical analysis software. Although the average public saving rate was clearly higher than the private one, a test of differences in saving rates by five-year age classes and five current-income classes revealed that neither was necessarily higher. Further, a common point between public and private was that average saving rates peaked in the 35-39 age bracket, began to decline after age 40, and then generally recovered before the official retirement age, in the 55-59 bracket. We learned that, in households of two or more individuals believed to be married, it is essential to save for retirement during the periods in which doing so is possible, regardless of children's educational expenses.
The Actual State of Family Budgets in Single-Mother Households: A Comparison of Single-Income and Double-Income Households
Keirei Ka, Ji-in Toh and Keijo Lee, Doctoral Students, Graduate School of Applied Informatics, University of Hyogo
- Industry/Topic: General/Data Exploration/Data Visualization
- Level: 1
According to the 2011 Nationwide Survey on Fatherless Families, there are an estimated 1,238,000 single-mother households across Japan. We used pseudo-microdata for educational purposes provided by the National Statistics Center to analyze the economic conditions and lives of these households, with a particular focus on Engel’s coefficient and the ratio of spending on home-cooked meals, home-meal replacements, and eating out. The aim was to analyze the data with a focus on differences in the lives of single-mother households and general population married-couple households (both single-income and double-income households).